Sample records for space mapping optimization

  1. Optimal Mass Transport for Shape Matching and Comparison

    PubMed Central

    Su, Zhengyu; Wang, Yalin; Shi, Rui; Zeng, Wei; Sun, Jian; Luo, Feng; Gu, Xianfeng

    2015-01-01

    Surface-based 3D shape analysis plays a fundamental role in computer vision and medical imaging. This work proposes to use the optimal mass transport map for shape matching and comparison, focusing on two important applications: surface registration and shape space. The computation of the optimal mass transport map is based on Monge-Brenier theory; in comparison with the conventional method based on Monge-Kantorovich theory, this approach significantly improves efficiency by reducing the computational complexity from O(n²) to O(n). For the surface registration problem, one commonly used approach is to use a conformal map to convert the shapes into some canonical space. Although conformal mappings have small angle distortions, they may introduce large area distortions, which are likely to cause numerical instability and thus failures of shape analysis. This work proposes to compose the conformal map with the optimal mass transport map to obtain an area-preserving map, which is intrinsic to the Riemannian metric, unique, and diffeomorphic. For the shape space study, this work introduces a novel Riemannian framework, the Conformal Wasserstein Shape Space, by combining conformal geometry and optimal mass transport theory. In our work, all metric surfaces with the disk topology are mapped to the unit planar disk by a conformal mapping, which pushes the area element on the surface to a probability measure on the disk. The optimal mass transport provides a map from the shape space of all topological disks with metrics to the Wasserstein space of the disk, and the pullback Wasserstein metric equips the shape space with a Riemannian metric. We validate our work by numerous experiments and comparisons with prior approaches; the experimental results demonstrate the efficiency and efficacy of our proposed approach. PMID:26440265

  2. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
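
    A minimal sketch of the space-mapping loop just described, with simple synthetic response functions standing in for the fine (FDFD) and coarse (transmission-line) models; the Lorentzian responses, the 0.3 peak shift, and the single design variable are illustrative assumptions, not the paper's device models.

      import numpy as np
      from scipy.optimize import minimize_scalar

      w = np.linspace(0.0, 4.0, 81)                 # frequency sweep
      target = 1.0 / (1.0 + (w - 2.0) ** 2)         # design goal: peak at w = 2

      def fine(x):      # stand-in for the expensive FDFD model (peak shifted by 0.3)
          return 1.0 / (1.0 + (w - (x + 0.3)) ** 2)

      def coarse(x):    # stand-in for the cheap transmission-line model
          return 1.0 / (1.0 + (w - x) ** 2)

      def fit(model, y):                            # least-squares response match
          return minimize_scalar(lambda p: np.sum((model(p) - y) ** 2),
                                 bounds=(0.0, 4.0), method='bounded').x

      x = fit(coarse, target)                       # step 1: optimize the coarse model alone
      for it in range(4):
          z = fit(coarse, fine(x))                  # parameter extraction (one fine solve)
          x = x - (z - fit(coarse, target))         # first-order space-mapping update
          print(f"iter {it}: x = {x:.3f}, error = {np.abs(fine(x) - target).max():.2e}")

    Because the models are misaligned by a simple shift here, the update lands on the fine-model optimum after one fine evaluation, which is the cost saving the record describes.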

  3. Riemannian Metric Optimization on Surfaces (RMOS) for Intrinsic Brain Mapping in the Laplace-Beltrami Embedding Space

    PubMed Central

    Gahm, Jin Kyu; Shi, Yonggang

    2018-01-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer’s disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. PMID:29574399
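
    The following sketch illustrates the kind of spectral embedding RMOS operates in, using a uniform graph Laplacian on mesh edges as a stand-in for the cotangent Laplace-Beltrami operator; the cycle-graph "mesh", uniform edge weights, and scaling are assumptions for illustration, and the paper's per-edge Riemannian metric optimization is not implemented.

      import numpy as np

      def spectral_embedding(edges, n_vertices, k=2):
          """Embed vertices via the first k nontrivial Laplacian eigenvectors."""
          W = np.zeros((n_vertices, n_vertices))
          for a, b in edges:                      # uniform weights stand in for the
              W[a, b] = W[b, a] = 1.0             # optimized per-edge metrics
          L = np.diag(W.sum(axis=1)) - W          # graph Laplacian (dense: toy sizes)
          vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
          # Drop the constant eigenvector; scale modes as in LB-style embeddings.
          return vecs[:, 1:k + 1] / np.sqrt(vals[1:k + 1])

      # Toy "mesh": a 12-vertex cycle; nearby vertices land near each other
      # in the embedding space, which is what mapping in this space exploits.
      edges = [(v, (v + 1) % 12) for v in range(12)]
      X = spectral_embedding(edges, 12, k=2)
      print(X.round(2))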

  4. Space mapping method for the design of passive shields

    NASA Astrophysics Data System (ADS)

    Sergeant, Peter; Dupré, Luc; Melkebeek, Jan

    2006-04-01

    The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort, as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils, and the currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield quickly and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.

  5. Growing a hypercubical output space in a self-organizing feature map.

    PubMed

    Bauer, H U; Villmann, T

    1997-01-01

    Neural maps project data from an input space onto a neuron position in an (often lower-dimensional) output space grid in a neighborhood-preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by adapting the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real-world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
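
    For orientation, a minimal sketch of the underlying Kohonen SOFM update on a fixed two-dimensional grid; the GSOM of the record additionally adapts the grid's dimensionality and extensions during learning, which is not shown here. Grid size, decay schedules, and the toy data are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      grid = np.array([(i, j) for i in range(6) for j in range(6)])  # fixed output grid
      W = rng.random((len(grid), 2))                                 # codebook vectors
      data = rng.random((2000, 2))                                   # toy input data

      for t, x in enumerate(data):
          frac = t / len(data)
          lr = 0.5 * (0.01 / 0.5) ** frac                  # decaying learning rate
          sigma = 3.0 * (0.5 / 3.0) ** frac                # shrinking neighborhood radius
          winner = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
          d2 = ((grid - grid[winner]) ** 2).sum(axis=1)    # output-space distances
          h = np.exp(-d2 / (2 * sigma ** 2))               # neighborhood function
          W += lr * h[:, None] * (x - W)                   # pull neurons toward x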

  6. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space

  7. Riemannian metric optimization on surfaces (RMOS) for intrinsic brain mapping in the Laplace-Beltrami embedding space.

    PubMed

    Gahm, Jin Kyu; Shi, Yonggang

    2018-05-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Metric Optimization for Surface Analysis in the Laplace-Beltrami Embedding Space

    PubMed Central

    Lai, Rongjie; Wang, Danny J.J.; Pelletier, Daniel; Mohr, David; Sicotte, Nancy; Toga, Arthur W.

    2014-01-01

    In this paper we present a novel approach for the intrinsic mapping of anatomical surfaces and its application in brain mapping research. Using the Laplace-Beltrami eigen-system, we represent each surface with an isometry invariant embedding in a high dimensional space. The key idea in our system is that we realize surface deformation in the embedding space via the iterative optimization of a conformal metric without explicitly perturbing the surface or its embedding. By minimizing a distance measure in the embedding space with metric optimization, our method generates a conformal map directly between surfaces with highly uniform metric distortion and the ability to align salient geometric features. Besides pairwise surface maps, we also extend the metric optimization approach for group-wise atlas construction and multi-atlas cortical label fusion. In experimental results, we demonstrate the robustness and generality of our method by applying it to map both cortical and hippocampal surfaces in population studies. For cortical labeling, our method achieves excellent performance in a cross-validation experiment with 40 manually labeled surfaces, and successfully models localized brain development in a pediatric study of 80 subjects. For hippocampal mapping, our method produces much more significant results than two popular tools on a multiple sclerosis study of 109 subjects. PMID:24686245

  9. Using Neural Networks in the Mapping of Mixed Discrete/Continuous Design Spaces With Application to Structural Design

    DTIC Science & Technology

    1994-02-01

    desired that the problem to which the design space mapping techniques were applied be easily analyzed, yet provide a design space with realistic complexity...consistent fully stressed solution. 3 DESIGN SPACE MAPPING In order to reduce the computational expense required to optimize design spaces, neural networks...employed in this study. Some of the issues involved in using neural networks to do design space mapping are how to configure the neural network, how much

  10. Optimal slew path planning for the Sino-French Space-based multiband astronomical Variable Objects Monitor mission

    NASA Astrophysics Data System (ADS)

    She, Yuchen; Li, Shuang

    2018-01-01

    A planning algorithm to calculate a satellite's optimal slew trajectory under a given keep-out constraint is proposed. An energy-optimal formulation is proposed for the Space-based multiband astronomical Variable Objects Monitor Mission Analysis and Planning (MAP) system. The innovation of the proposed planning algorithm is that the satellite structure and control limitations are not treated as optimization constraints but are instead formulated into the cost function. This modification relieves the burden on the optimizer and increases optimization efficiency, which is the major challenge in designing the MAP system. A mathematical analysis is given to prove that there is a proportional mapping between the formulation and the satellite controller output. Simulations of different scenarios are given to demonstrate the efficiency of the developed algorithm.

  11. Computer-Aided Design and Optimization of High-Performance Vacuum Electronic Devices

    DTIC Science & Technology

    2006-08-15

    approximations to the metric, and space mapping wherein low-accuracy (coarse mesh) solutions can potentially be used more effectively in an...interface and algorithm development. • Work on space mapping or related methods for utilizing models of varying levels of approximation within an

  12. Fundamental Studies on Crashworthiness Design with Uncertainties in the System

    DTIC Science & Technology

    2005-01-01

    studied; examples include using the Response Surface Methods (RSM) and Design of Experiment (DOE) [2-4]. Space Mapping (SM) is another practical...Exposed to Impact Load Using a Space Mapping Technique,” Struct. Multidisc. Optim., Vol. 27, pp. 411-420 (2004). 6. Mayer, R. R., Kikuchi, N. and Scott

  13. Fundamental Studies on Crashworthiness Design with Uncertainties in the System

    DTIC Science & Technology

    2005-01-01

    studied; examples include using the Response Surface Methods (RSM) and Design of Experiment (DOE) [2-4]. Space Mapping (SM) is another practical...to Impact Load Using a Space Mapping Technique," Struct. Multidisc. Optim., Vol. 27, pp. 411-420 (2004). 6. Mayer, R. R., Kikuchi, N. and Scott, R

  14. Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.

    PubMed

    Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue

    2015-08-20

    Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.

  15. Digi Island: A Serious Game for Teaching and Learning Digital Circuit Optimization

    NASA Technical Reports Server (NTRS)

    Harper, Michael; Miller, Joseph; Shen, Yuzhong

    2011-01-01

    Karnaugh maps, also known as K-maps, are a tool used to optimize or simplify digital logic circuits. A K-map is a graphical display of a logic circuit, and K-map optimization is essentially the process of finding a minimum number of maximal aggregations of K-map cells with values of 1, according to a set of rules. Digi Island is a serious game designed to aid students in learning K-map optimization. The game takes place on an exotic island (called Digi Island) in the Pacific Ocean. The player is an adventurer on Digi Island who will transform it into a tourist attraction by developing real estate, such as amusement parks and hotels. The Digi Island game elegantly converts the 1s and 0s of digital circuits into usable and unusable spaces on a beautiful island and transforms K-map optimization into real estate development, an activity with which many students are familiar and in which they are interested. This paper discusses the design, development, and some preliminary results of the Digi Island game.
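
    A hedged sketch of the task the game wraps: covering the 1-cells of a truth table with as few product terms (subcubes) as possible. The brute-force implicant enumeration and greedy set cover below are illustrative assumptions, not the Digi Island game mechanics or an exact minimizer.

      from itertools import product

      N = 4                                  # number of input variables
      ones = {0, 1, 2, 3, 8, 9, 10, 11}      # minterms where the function is 1

      def cells(cube):
          """Minterms covered by a cube given as '0'/'1'/'-' per bit (MSB first)."""
          free = [i for i, c in enumerate(cube) if c == '-']
          base = int(''.join(c if c != '-' else '0' for c in cube), 2)
          covered = set()
          for bits in product('01', repeat=len(free)):
              m = list('{:0{}b}'.format(base, N))
              for i, b in zip(free, bits):
                  m[i] = b
              covered.add(int(''.join(m), 2))
          return covered

      # Implicants: subcubes containing only 1-cells; then a greedy set cover.
      implicants = [c for c in product('01-', repeat=N) if cells(c) <= ones]
      cover, uncovered = [], set(ones)
      while uncovered:
          best = max(implicants, key=lambda c: len(cells(c) & uncovered))
          cover.append(best)
          uncovered -= cells(best)
      print(cover)   # [('-', '0', '-', '-')]: the single product term B'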

  16. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    PubMed

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy is outlined for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain. The current density maps obtained using the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.

  17. The application and use of chemical space mapping to interpret crystallization screening results

    PubMed Central

    Snell, Edward H.; Nagel, Ray M.; Wojtaszcyk, Ann; O’Neill, Hugh; Wolfley, Jennifer L.; Luft, Joseph R.

    2008-01-01

    Macromolecular crystallization screening is an empirical process. It often begins by setting up experiments with a number of chemically diverse cocktails designed to sample chemical space known to promote crystallization. Where a potential crystal is seen, a refined screen is set up, optimizing around that condition. By using an incomplete factorial sampling of chemical space to formulate the cocktails and presenting the results graphically, it is possible to readily identify trends relevant to crystallization, coarsely sample the phase diagram, and help guide the optimization process. In this paper, chemical space mapping is applied both to single macromolecules and to a diverse set of macromolecules in order to illustrate how visual information is more readily understood and assimilated than the same information presented textually. PMID:19018100

  18. The application and use of chemical space mapping to interpret crystallization screening results.

    PubMed

    Snell, Edward H; Nagel, Ray M; Wojtaszcyk, Ann; O'Neill, Hugh; Wolfley, Jennifer L; Luft, Joseph R

    2008-12-01

    Macromolecular crystallization screening is an empirical process. It often begins by setting up experiments with a number of chemically diverse cocktails designed to sample chemical space known to promote crystallization. Where a potential crystal is seen, a refined screen is set up, optimizing around that condition. By using an incomplete factorial sampling of chemical space to formulate the cocktails and presenting the results graphically, it is possible to readily identify trends relevant to crystallization, coarsely sample the phase diagram, and help guide the optimization process. In this paper, chemical space mapping is applied both to single macromolecules and to a diverse set of macromolecules in order to illustrate how visual information is more readily understood and assimilated than the same information presented textually.

  19. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
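
    A minimal sketch of the discrete Monge-Kantorovich step with quadratic cost, posed as a linear program; the 8-point supports and the Gaussian target measure are toy assumptions, and the record's graph-based pseudo-time flow and transfer-operator propagation are not reproduced.

      import numpy as np
      from scipy.optimize import linprog

      x = np.linspace(0.0, 1.0, 8)             # common support of both measures
      mu = np.ones(8) / 8                      # initial measure (uniform)
      nu = np.exp(-((x - 0.7) / 0.2) ** 2)
      nu /= nu.sum()                           # target measure

      n = len(x)
      C = (x[:, None] - x[None, :]) ** 2       # quadratic cost c(x, y) = |x - y|^2
      A_eq = np.zeros((2 * n, n * n))          # marginal constraints on the coupling
      for i in range(n):
          A_eq[i, i * n:(i + 1) * n] = 1       # sum_j pi[i, j] = mu[i]
          A_eq[n + i, i::n] = 1                # sum_i pi[i, j] = nu[j]
      b_eq = np.concatenate([mu, nu])

      res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
      plan = res.x.reshape(n, n)               # optimal transport plan
      print("transport cost:", res.fun)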

  20. An atomic model of brome mosaic virus using direct electron detection and real-space optimization.

    PubMed

    Wang, Zhao; Hryc, Corey F; Bammes, Benjamin; Afonine, Pavel V; Jakana, Joanita; Chen, Dong-Hua; Liu, Xiangan; Baker, Matthew L; Kao, Cheng; Ludtke, Steven J; Schmid, Michael F; Adams, Paul D; Chiu, Wah

    2014-09-04

    Advances in electron cryo-microscopy have enabled structure determination of macromolecules at near-atomic resolution. However, structure determination, even using de novo methods, remains susceptible to model bias and overfitting. Here we describe a complete workflow for data acquisition, image processing, all-atom modelling and validation of brome mosaic virus, an RNA virus. Data were collected with a direct electron detector in integrating mode and an exposure beyond the traditional radiation damage limit. The final density map has a resolution of 3.8 Å as assessed by two independent data sets and maps. We used the map to derive an all-atom model with a newly implemented real-space optimization protocol. The validity of the model was verified by its match with the density map and a previous model from X-ray crystallography, as well as the internal consistency of models from independent maps. This study demonstrates a practical approach to obtain a rigorously validated atomic resolution electron cryo-microscopy structure.

  21. Efficient search, mapping, and optimization of multi-protein genetic systems in diverse bacteria

    PubMed Central

    Farasat, Iman; Kushwaha, Manish; Collens, Jason; Easterbrook, Michael; Guido, Matthew; Salis, Howard M

    2014-01-01

    Developing predictive models of multi-protein genetic systems to understand and optimize their behavior remains a combinatorial challenge, particularly when measurement throughput is limited. We developed a computational approach to build predictive models and identify optimal sequences and expression levels, while circumventing combinatorial explosion. Maximally informative genetic system variants were first designed by the RBS Library Calculator, an algorithm to design sequences for efficiently searching a multi-protein expression space across a > 10,000-fold range with tailored search parameters and well-predicted translation rates. We validated the algorithm's predictions by characterizing 646 genetic system variants, encoded in plasmids and genomes, expressed in six gram-positive and gram-negative bacterial hosts. We then combined the search algorithm with system-level kinetic modeling, requiring the construction and characterization of 73 variants to build a sequence-expression-activity map (SEAMAP) for a biosynthesis pathway. Using model predictions, we designed and characterized 47 additional pathway variants to navigate its activity space, find optimal expression regions with desired activity response curves, and relieve rate-limiting steps in metabolism. Creating sequence-expression-activity maps accelerates the optimization of many protein systems and allows previous measurements to quantitatively inform future designs. PMID:24952589

  22. Surrogate Structures for Computationally Expensive Optimization Problems With CPU-Time Correlated Functions

    DTIC Science & Technology

    2007-06-01

    xc)−∇2g(x̃c)](x− xc). The second transformation is a space mapping function P that handles the change in variable dimensions (see Bandler et al. [11...17(2):188–217, 2004. 11. Bandler, J. W., Q. Cheng, S. Dakroury, A. S. Mohamed, M. H. Bakr, K. Madsen, J. Søndergaard. “Space Mapping: The State of

  23. Optimal stimulus scheduling for active estimation of evoked brain networks.

    PubMed

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. We show that the problem of scheduling which nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
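
    A hedged sketch of the greedy scheduling idea under a linear-Gaussian model: at each step, probe the node whose hypothetical measurement most reduces the log-determinant of the error covariance. The random observation rows and noise level are assumptions; the record's state-space EM updates and optimality conditions are omitted.

      import numpy as np

      rng = np.random.default_rng(1)
      n, horizon, noise = 8, 5, 0.1
      P = np.eye(n)                            # prior covariance over unknown weights
      H = rng.standard_normal((n, n))          # row k: observation made by probing node k

      def posterior(P, h):
          """Kalman covariance update for a scalar measurement row h."""
          S = float(h @ P @ h.T) + noise       # innovation variance
          return P - (P @ h.T @ h @ P) / S

      schedule = []
      for _ in range(horizon):
          # Greedy step: pick the probe with the largest log-det uncertainty drop.
          gains = [np.linalg.slogdet(P)[1] - np.linalg.slogdet(posterior(P, H[k:k + 1]))[1]
                   for k in range(n)]
          k_best = int(np.argmax(gains))
          schedule.append(k_best)
          P = posterior(P, H[k_best:k_best + 1])
      print("probe schedule:", schedule)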

  24. Optimal stimulus scheduling for active estimation of evoked brain networks

    NASA Astrophysics Data System (ADS)

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling which nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.

  25. Optimization of Brain T2 Mapping Using Standard CPMG Sequence In A Clinical Scanner

    NASA Astrophysics Data System (ADS)

    Hnilicová, P.; Bittšanský, M.; Dobrota, D.

    2014-04-01

    In magnetic resonance imaging, transverse relaxation time (T2) mapping is a useful quantitative tool enabling enhanced diagnostics of many brain pathologies. The aim of our study was to test the influence of different sequence parameters on calculated T2 values, including multi-slice measurements, slice position, interslice gap, echo spacing, and pulse duration. Measurements were performed using a standard multi-slice multi-echo CPMG imaging sequence on a routine 1.5 Tesla whole-body MR scanner. We used multiple phantoms with different agarose concentrations (0% to 4%) and verified the results on a healthy volunteer. Neither the pulse duration, the size of the interslice gap, nor the slice shift had any impact on the calculated T2. Measurement accuracy increased with shorter echo spacing. The standard multi-slice multi-echo CPMG protocol with the shortest echo spacing, the smallest available interslice gap (100% of slice thickness), and the shorter pulse duration was found to be optimal and reliable for calculating T2 maps in the human brain.
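
    The per-voxel computation behind such a T2 map is a monoexponential fit to the multi-echo signal; a minimal sketch, with synthetic echo times and signal standing in for scanner data:

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(TE, S0, T2):
          return S0 * np.exp(-TE / T2)

      TE = np.arange(10.0, 330.0, 20.0)        # echo times (ms), illustrative protocol
      rng = np.random.default_rng(2)
      signal = decay(TE, 1000.0, 85.0) + rng.normal(0, 5, TE.size)  # one voxel's echoes

      (S0_fit, T2_fit), _ = curve_fit(decay, TE, signal, p0=(signal[0], 50.0))
      print(f"fitted T2 = {T2_fit:.1f} ms")     # repeated per voxel to form the T2 map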

  26. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics against system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.

  27. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.

    PubMed

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using the Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane, or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any two probability measures, there is a unique optimal mass transport map between them; the transportation cost defines the Wasserstein distance between them. The Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method.

  28. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis

    PubMed Central

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using the Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane, or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any two probability measures, there is a unique optimal mass transport map between them; the transportation cost defines the Wasserstein distance between them. The Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method. PMID:26221691

  29. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    PubMed

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.

  30. MAPPING THE GALAXY COLOR–REDSHIFT RELATION: OPTIMAL PHOTOMETRIC REDSHIFT CALIBRATION STRATEGIES FOR COSMOLOGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masters, Daniel; Steinhardt, Charles; Faisst, Andreas

    2015-11-01

    Calibrating the photometric redshifts of ≳10^9 galaxies for upcoming weak lensing cosmology experiments is a major challenge for the astrophysics community. The path to obtaining the required spectroscopic redshifts for training and calibration is daunting, given the anticipated depths of the surveys and the difficulty in obtaining secure redshifts for some faint galaxy populations. Here we present an analysis of the problem based on the self-organizing map, a method of mapping the distribution of data in a high-dimensional space and projecting it onto a lower-dimensional representation. We apply this method to existing photometric data from the COSMOS survey selected to approximate the anticipated Euclid weak lensing sample, enabling us to robustly map the empirical distribution of galaxies in the multidimensional color space defined by the expected Euclid filters. Mapping this multicolor distribution lets us determine where—in galaxy color space—redshifts from current spectroscopic surveys exist and where they are systematically missing. Crucially, the method lets us determine whether a spectroscopic training sample is representative of the full photometric space occupied by the galaxies in a survey. We explore optimal sampling techniques and estimate the additional spectroscopy needed to map out the color–redshift relation, finding that sampling the galaxy distribution in color space in a systematic way can efficiently meet the calibration requirements. While the analysis presented here focuses on the Euclid survey, similar analysis can be applied to other surveys facing the same calibration challenge, such as DES, LSST, and WFIRST.

  31. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is thus accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology is demonstrated by two numerical examples.
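
    A minimal sketch of the null-space elimination idea this method builds on: parametrize the feasible set of a linear(ized) equality constraint as x = x_p + Z y and optimize in the reduced space. The toy quadratic objective and single constraint are assumptions; the condensed harmonic-balance constraints, reduced SQP machinery, and second-order correction are not shown.

      import numpy as np
      from scipy.linalg import lstsq, null_space
      from scipy.optimize import minimize

      A = np.array([[1.0, 1.0, 1.0]])          # linear(ized) equality constraint A x = b
      b = np.array([1.0])
      f = lambda x: (x[0] - 1) ** 2 + 2 * x[1] ** 2 + 3 * x[2] ** 2  # toy objective

      x_p = lstsq(A, b)[0]                     # any particular feasible point
      Z = null_space(A)                        # basis of the reduced (null) space
      res = minimize(lambda y: f(x_p + Z @ y), np.zeros(Z.shape[1]))
      x_opt = x_p + Z @ res.x                  # map the reduced solution back
      print(x_opt, "constraint residual:", A @ x_opt - b)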

  32. Generating Multi-Destination Maps.

    PubMed

    Zhang, Junsong; Fan, Jiepeng; Luo, Zhenshan

    2017-08-01

    Multi-destination maps are a kind of navigation map intended to guide visitors to multiple destinations within a region, which can be of great help to urban visitors. However, they are not yet available in current online map services. To address this issue, we introduce a novel layout model designed especially for generating multi-destination maps, which considers both the global and local layout of a multi-destination map. We model the layout problem as a graph drawing that satisfies a set of hard and soft constraints. In the global layout phase, we balance the scale factors between ROIs. In the local layout phase, we give all edges good visibility and optimize the map layout to preserve the relative lengths and angles of roads. We also propose a perturbation-based optimization method to find an optimal layout in the complex solution space. The multi-destination maps generated by our system are potentially feasible on modern mobile devices, and our results can show an overview and a detail view of the whole map at the same time. In addition, we performed a user study to evaluate the effectiveness of our method, and the results show that the multi-destination maps achieve our goals well.

  33. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm exists which guarantees a global minimum for PDECCO problems. A new mapping has been developed that transforms PDECCO problems which have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a globally optimal solution for the original PDECCO problem.
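
    For reference, the target form is a QUBO: minimize x^T Q x over binary x. The sketch below solves a toy instance by exhaustive search as a stand-in for an adiabatic quantum optimizer; the Q matrix is an arbitrary example, not one produced by the record's PDE-constrained mapping.

      import itertools
      import numpy as np

      Q = np.array([[-1.0, 2.0, 0.0],
                    [ 0.0, -1.0, 2.0],
                    [ 0.0,  0.0, -1.0]])   # toy upper-triangular QUBO matrix

      best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=3)),
                 key=lambda z: z @ Q @ z)  # exhaustive search stands in for the AQO
      print("optimal bitstring:", best, "energy:", best @ Q @ best)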

  34. A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy

    PubMed Central

    Wen, Hui; Xie, Weixin; Pei, Jihong

    2016-01-01

    This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows: firstly, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and node parameters, and a form of heterogeneous-sample repulsive force is designed to further optimize each generated RBF hidden node's parameters; the optimized structure-adaptive RBF network is used for adaptive nonlinear mapping of the sample space. Then, the number of adaptively generated RBF hidden nodes determines the number of subsequent BP input nodes, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Experiments comparing SAHRBF-BP with other algorithms on different data sets show its superiority. Especially on most low-dimensional data sets with large numbers of samples, the classification performance of SAHRBF-BP outperforms other algorithms for training single-hidden-layer feedforward networks (SLFNs). PMID:27792737

  35. Multidimensional scaling for evolutionary algorithms--visualization of the path through search space and solution space using Sammon mapping.

    PubMed

    Pohlheim, Hartmut

    2006-01-01

    Multidimensional scaling is presented as a technique for displaying high-dimensional data with standard visualization techniques. The variant used here is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space taken by the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (the path through the solution space).
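
    A minimal gradient-descent sketch of Sammon mapping: 2-D coordinates are adjusted so that pairwise distances match the high-dimensional ones, weighted by 1/D_ij as in Sammon's stress. The toy data, step size, and iteration count are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.random((30, 10))                           # high-dimensional data
      D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
      np.fill_diagonal(D, 1.0)                           # avoid division by zero

      Y = rng.random((30, 2))                            # 2-D image of the data
      for _ in range(1000):
          diff = Y[:, None] - Y[None, :]
          d = np.linalg.norm(diff, axis=-1)
          np.fill_diagonal(d, 1.0)
          # Gradient of Sammon stress  E = sum_{i<j} (D_ij - d_ij)^2 / D_ij
          coef = (d - D) / (D * d)
          np.fill_diagonal(coef, 0.0)
          grad = 2.0 * (coef[:, :, None] * diff).sum(axis=1)
          Y -= 0.1 * grad / len(Y)                       # damped gradient step
      print("embedded coordinates:", Y.shape)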

  36. Optimization of Empirical Force Fields by Parameter Space Mapping: A Single-Step Perturbation Approach.

    PubMed

    Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E

    2017-12-12

    A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables a range of target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane, as well as the density and heat of vaporization of the liquid at atmospheric pressure, for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
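
    A hedged sketch of the parameter-space-mapping idea: evaluate a calculated-minus-target error surface over a grid of Lennard-Jones (epsilon, sigma) combinations and read off the low-error region, rather than following a single local gradient. The error function below is a synthetic stand-in for the free-enthalpy and liquid-property calculations of the record.

      import numpy as np

      def property_error(eps, sigma):      # synthetic calculated-minus-target error
          return (10.0 * eps * (sigma - 3.0) - 2.0) ** 2 + 0.05 * (sigma - 3.5) ** 2

      eps_grid = np.linspace(0.1, 1.0, 50)
      sig_grid = np.linspace(3.0, 4.0, 50)
      E, S = np.meshgrid(eps_grid, sig_grid, indexing='ij')
      surface = property_error(E, S)       # the parameter-space map

      i, j = np.unravel_index(surface.argmin(), surface.shape)
      print(f"low-error region near eps = {eps_grid[i]:.2f}, sigma = {sig_grid[j]:.2f}")

    In practice one surface per target property would be computed and the optimal region taken where all surfaces are simultaneously low.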

  37. An efficient approach to the travelling salesman problem using self-organizing maps.

    PubMed

    Vieira, Frederico Carvalho; Dória Neto, Adrião Duarte; Costa, José Alfredo Ferreira

    2003-04-01

    This paper presents an approach to the well-known Travelling Salesman Problem (TSP) using Self-Organizing Maps (SOM). The SOM algorithm provides interesting topological information about its neuron configuration in Cartesian space, which can be used to solve optimization problems. Aspects of initialization, parameter adaptation, and complexity analysis of the proposed SOM-based algorithm are discussed. The results show an average deviation of 3.7% from the optimal tour length for a set of 12 TSP instances.
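
    A minimal sketch of the SOM approach to the TSP: a ring of neurons (the one-dimensional output space) is pulled toward randomly presented cities, and the final ring order yields the tour. The city count, neuron count, and decay schedules are illustrative assumptions, not the initialization and adaptation scheme analyzed in the record.

      import numpy as np

      rng = np.random.default_rng(4)
      cities = rng.random((12, 2))
      m = 8 * len(cities)                      # neurons on the ring (output space)
      theta = 2 * np.pi * np.arange(m) / m
      ring = 0.5 + 0.3 * np.stack([np.cos(theta), np.sin(theta)], axis=1)

      T = 4000
      for t in range(T):
          city = cities[rng.integers(len(cities))]
          winner = np.argmin(((ring - city) ** 2).sum(axis=1))
          idx = np.arange(m)                   # circular distance on the ring
          d = np.minimum(np.abs(idx - winner), m - np.abs(idx - winner))
          sigma = max(m / 8 * (1 - t / T), 1.0)
          lr = 0.8 * (1 - t / T) + 0.01
          ring += lr * np.exp(-(d / sigma) ** 2)[:, None] * (city - ring)

      # Read the tour off the ring: order cities by their nearest neuron.
      tour = np.argsort([np.argmin(((ring - c) ** 2).sum(axis=1)) for c in cities])
      print("visit order:", tour)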

  38. Reciprocal Space Mapping of Macromolecular Crystals in the Home Laboratory

    NASA Technical Reports Server (NTRS)

    Snell, Edward H.; Fewster, P. F.; Andrew, Norman; Boggon, T. J.; Judge, Russell A.; Pusey, Marc A.

    1999-01-01

    Reciprocal space mapping techniques are used widely by the materials science community to provide physical information about crystal samples. We have used similar methods at synchrotron sources to look at the quality of macromolecular crystals produced both on the ground and under microgravity conditions. The limited nature of synchrotron time has led us to explore the use of a high-resolution materials research diffractometer to perform similar measurements in the home laboratory. Although the available intensity is much reduced due to the beam conditioning necessary for high reciprocal-space resolution, lower-resolution data can be collected in the same detail as at the synchrotron source. Experiments can thus be optimized at home to make the best use of the synchrotron time available. Preliminary results, including information on mosaicity and internal strains from reciprocal space maps, will be presented.

  39. Weighted augmented Jacobian matrix with a variable coefficient method for kinematics mapping of space teleoperation based on human-robot motion similarity

    NASA Astrophysics Data System (ADS)

    Shi, Zhong; Huang, Xuexiang; Hu, Tianjian; Tan, Qian; Hou, Yuzhuo

    2016-10-01

    Space teleoperation is an important space technology, and human-robot motion similarity can improve the flexibility and intuition of space teleoperation. This paper aims to obtain an appropriate coupled Cartesian-joint space kinematics mapping method for space teleoperation. First, the coupled Cartesian-joint similarity principles concerning kinematics differences are defined. Then, a novel weighted augmented Jacobian matrix with a variable coefficient (WAJM-VC) method for kinematics mapping is proposed. The Jacobian matrix is augmented to achieve a global similarity of human-robot motion. A clamping weighted least norm scheme is introduced to achieve local optimizations, and the operating ratio coefficient is varied to pursue similarity in the elbow joint. Similarity in Cartesian space and the property of joint constraint satisfaction are analysed to determine the damping factor and clamping velocity. Finally, a teleoperation system based on human motion capture is established, and the experimental results indicate that the proposed WAJM-VC method can improve the flexibility and intuition of space teleoperation to complete complex space tasks.

  40. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    PubMed

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.

  41. Accurate and Robust Unitary Transformations of a High-Dimensional Quantum System

    NASA Astrophysics Data System (ADS)

    Anderson, B. E.; Sosa-Martinez, H.; Riofrío, C. A.; Deutsch, Ivan H.; Jessen, Poul S.

    2015-06-01

    Unitary transformations are the most general input-output maps available in closed quantum systems. Good control protocols have been developed for qubits, but questions remain about the use of optimal control theory to design unitary maps in high-dimensional Hilbert spaces, and about the feasibility of their robust implementation in the laboratory. Here we design and implement unitary maps in a 16-dimensional Hilbert space associated with the 6S1/2 ground state of 133Cs, achieving fidelities >0.98 with built-in robustness to static and dynamic perturbations. Our work has relevance for quantum information processing and provides a template for similar advances on other physical platforms.

  42. AlphaSpace: Fragment-Centric Topographical Mapping To Target Protein–Protein Interaction Interfaces

    PubMed Central

    2016-01-01

    Inhibition of protein–protein interactions (PPIs) is emerging as a promising therapeutic strategy despite the difficulty in targeting such interfaces with drug-like small molecules. PPIs generally feature large and flat binding surfaces as compared to typical drug targets. These features pose a challenge for structural characterization of the surface using geometry-based pocket-detection methods. An attractive mapping strategy—that builds on the principles of fragment-based drug discovery (FBDD)—is to detect the fragment-centric modularity at the protein surface and then characterize the large PPI interface as a set of localized, fragment-targetable interaction regions. Here, we introduce AlphaSpace, a computational analysis tool designed for fragment-centric topographical mapping (FCTM) of PPI interfaces. Our approach uses the alpha sphere construct, a geometric feature of a protein’s Voronoi diagram, to map out concave interaction space at the protein surface. We introduce two new features—alpha-atom and alpha-space—and the concept of the alpha-atom/alpha-space pair to rank pockets for fragment-targetability and to facilitate the evaluation of pocket/fragment complementarity. The resulting high-resolution interfacial map of targetable pocket space can be used to guide the rational design and optimization of small molecule or biomimetic PPI inhibitors. PMID:26225450

  43. High-dynamic range imaging techniques based on both color-separation algorithms used in conventional graphic arts and the human visual perception modeling

    NASA Astrophysics Data System (ADS)

    Lo, Mei-Chun; Hsieh, Tsung-Hsien; Perng, Ruey-Kuen; Chen, Jiong-Qiao

    2010-01-01

    The aim of this research is to derive illuminant-independent HDR imaging modules which can optimally and multispectrally reconstruct every color of concern in high-dynamic-range original images for preferable cross-media color reproduction applications. Each module, based on either a broadband or a multispectral approach, would incorporate models of perceptual HDR tone-mapping and device characterization. In this study, an xvYCC-format HDR digital camera was used to capture HDR scene images for testing. A tone-mapping module was derived based on a multiscale representation of the human visual system, using equations similar to the Michaelis-Menten photoreceptor adaptation equation. Additionally, an adaptive bilateral type of gamut-mapping algorithm, using the previously derived approach of multiple converging points, was incorporated with or without adaptive unsharp masking (USM) to carry out the optimization of HDR image rendering. An LCD with the standard color space of Adobe RGB (D65) was used as a soft-proofing platform to display/represent HDR original RGB images, and also to evaluate both the rendition quality and the prediction performance of the derived modules. Another LCD with the standard color space of sRGB was used to test the gamut-mapping algorithms, which were integrated with the derived tone-mapping module.
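
    A minimal sketch of a Michaelis-Menten-style (Naka-Rushton) photoreceptor adaptation of the kind such tone-mapping modules use: luminance is compressed by L^n / (L^n + L_a^n), with the adaptation level L_a taken here as the log-average luminance. The exponent and synthetic image are assumptions; the record's multiscale representation and gamut-mapping stages are not shown.

      import numpy as np

      def tone_map(L, n=0.73):
          """Photoreceptor-style compression of HDR luminance into (0, 1)."""
          L_a = np.exp(np.mean(np.log(L + 1e-6)))   # adaptation level: log-average
          return L ** n / (L ** n + L_a ** n)       # Michaelis-Menten / Naka-Rushton form

      hdr = np.exp(np.random.default_rng(5).normal(0.0, 2.0, (4, 4)))  # synthetic luminances
      print(tone_map(hdr).round(3))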

  44. An object correlation and maneuver detection approach for space surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Hu, Wei-Dong; Xin, Qin; Du, Xiao-Yong

    2012-10-01

    Object correlation and maneuver detection are persistent problems in space surveillance and the maintenance of a space object catalog. We integrate these two problems into one interrelated problem and consider them simultaneously under a scenario where space objects perform only a single in-track orbital maneuver during the time intervals between observations. We mathematically formulate this integrated scenario as a maximum a posteriori (MAP) estimation. In this work, we propose a novel approach to solve the MAP estimation. More precisely, the corresponding posterior probability of an orbital maneuver and a joint association event can be approximated by the Joint Probabilistic Data Association (JPDA) algorithm. Subsequently, the maneuvering parameters are estimated by optimally solving the constrained nonlinear least squares iterative process based on the second-order cone programming (SOCP) algorithm. The desired solution is derived according to the MAP criteria. The performance and advantages of the proposed approach have been shown by both theoretical analysis and simulation results. We hope that our work will stimulate future work on space surveillance and the maintenance of a space object catalog.

  45. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After the inverse transformation to the initial space, we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior point linear programming algorithm.

  46. Optimally cloned binary coherent states

    NASA Astrophysics Data System (ADS)

    Müller, C. R.; Leuchs, G.; Marquardt, Ch.; Andersen, U. L.

    2017-10-01

    Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous-variable states to map binary phase-shift keyed coherent states onto the Bloch sphere and to derive their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to that of the optimal cloner.

  7. Three-dimensional desirability spaces for quality-by-design-based HPLC development.

    PubMed

    Mokhtar, Hatem I; Abdel-Salam, Randa A; Hadad, Ghada M

    2015-04-01

    In this study, three-dimensional desirability spaces were introduced as a graphical representation method for the design space. This is illustrated in the context of the application of quality-by-design concepts to the development of a stability-indicating gradient reversed-phase high-performance liquid chromatography method for the determination of vinpocetine and α-tocopheryl acetate in a capsule dosage form. A mechanistic retention model to optimize gradient time, initial organic solvent concentration and ternary solvent ratio was constructed for each compound from six experimental runs. Then, the desirability function of each optimized criterion, and subsequently the global desirability function, were calculated throughout the knowledge space. The three-dimensional desirability spaces were plotted as zones exceeding a threshold value of the desirability index in the space defined by the three optimized method parameters. Probabilistic mapping of the desirability index aided the selection of the design space within the potential desirability subspaces. Three-dimensional desirability spaces offered better visualization of potential design spaces for the method as a function of three method parameters, with the ability to assign priorities to critical quality attributes, as compared with the corresponding resolution spaces. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
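    The desirability construction itself is generic and easy to prototype: per-criterion Derringer-Suich desirabilities are combined into a geometric mean and thresholded over a parameter grid. In the sketch below every response model, limit, and threshold is a made-up placeholder; only the mechanics (individual desirabilities, global desirability, boolean 3-D space) mirror the paper:

```python
import numpy as np

def desirability_max(y, lo, hi, weight=1.0):
    """Derringer-Suich one-sided desirability: 0 below lo, 1 above hi."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** weight

# grid over three method parameters (names and ranges are illustrative)
tg, phi0, ratio = np.meshgrid(np.linspace(10, 40, 30),   # gradient time (min)
                              np.linspace(20, 60, 30),   # initial %organic
                              np.linspace(0, 1, 30),     # ternary solvent ratio
                              indexing="ij")
res  = 0.05*tg - 0.02*phi0 + 1.5*ratio    # stand-in resolution response
time = 0.8*tg + 0.1*phi0                  # stand-in run-time response

d1 = desirability_max(res, 1.5, 3.0)          # want resolution >= 1.5
d2 = desirability_max(-time, -40.0, -20.0)    # want run time <= 40 min
D  = (d1 * d2) ** 0.5                         # global desirability (geom. mean)
design_space = D > 0.8                        # boolean 3-D desirability space
print(design_space.sum(), "grid points exceed the threshold")
```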

  8. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    The aim was to design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for long-term assessment of occupational concentrations, while preserving the temporal variability, accuracy, and precision of the predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations so as to capture the temporal variability and achieve high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with a random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than the random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using optimal removal were not statistically different from random removal when averaged over the entire facility. No statistical difference was observed between the optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving the high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
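    A simple way to prototype such a removal order is a greedy loop over a kriging surrogate: repeatedly drop the location whose removal least degrades reconstruction of the full field. The sketch below (a Gaussian-process regressor as the kriging engine; kernel scales and the stopping size are arbitrary) illustrates the idea rather than the paper's exact statistical procedure, and is written for clarity, not speed:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def greedy_removal_order(X, y):
    """X: (n, 2) sensor coordinates; y: (n,) measured (log-)concentrations.
    Returns candidate removal order, cheapest-to-drop first."""
    keep, order = list(range(len(X))), []
    while len(keep) > 2:
        errors = []
        for i in keep:
            rest = [j for j in keep if j != i]
            gp = GaussianProcessRegressor(RBF(10.0) + WhiteKernel(0.1),
                                          normalize_y=True).fit(X[rest], y[rest])
            errors.append(np.sqrt(np.mean((gp.predict(X) - y) ** 2)))  # RMSE
        i_drop = keep[int(np.argmin(errors))]
        order.append(i_drop)
        keep.remove(i_drop)
    return order
```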

  9. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
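    The core combinatorial problem here, assigning a chain of m pipeline modules to n processors so that the bottleneck (maximum per-processor load) is minimized, can be sketched compactly. The snippet below is illustrative only (the paper's O(nm log m) algorithm is more refined); it solves the contiguous-mapping problem by parametric search over the bottleneck value, assuming integer module weights:

```python
def min_bottleneck_mapping(w, n):
    """Contiguously assign modules with integer weights w to n processors,
    minimizing the maximum per-processor load (the pipeline bottleneck)."""
    def feasible(cap):
        procs, load = 1, 0
        for x in w:
            if x > cap:
                return False
            if load + x > cap:            # open a new processor
                procs, load = procs + 1, x
            else:
                load += x
        return procs <= n
    lo, hi = max(w), sum(w)               # binary search on the bottleneck
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_bottleneck_mapping([4, 2, 7, 1, 5, 3], n=3))   # -> 8: [4,2] [7,1] [5,3]
```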

  10. Terrain Dynamics Analysis Using Space-Time Domain Hypersurfaces and Gradient Trajectories Derived From Time Series of 3D Point Clouds

    DTIC Science & Technology

    2015-08-01

    optimized space-time interpolation method. Tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces ... Evolution Mapped by Terrestrial Laser Scanning (talk, AGU Fall 2012); Hardin E., Mitas L., Mitasova H., Simulation of Wind-Blown Sand for Geomorphological Applications: A Smoothed Particle Hydrodynamics Approach (GSA 2012); Russ E., Mitasova H., Time series and space-time cube analyses on ...

  11. Cartesian control of redundant robots

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1989-01-01

    A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ R^m and the robot actuator torque T ∈ R^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four-degree-of-freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.
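    The kinetostatic skeleton of such an F → T map can be written down directly. The sketch below is a simplification under stated assumptions: it uses the plain pseudoinverse rather than the paper's dynamically weighted, locally implemented map, and the Jacobian J is a random stand-in:

```python
import numpy as np

def torque_map(J, F, tau0):
    """Map a Cartesian input F (R^m) to torques T (R^n) for a redundant arm
    (m < n). J is the m x n task Jacobian. J^T F realizes F; the null-space
    projector adds internal torques tau0 that produce no end-effector force,
    freeing the redundancy for secondary objectives."""
    JT_pinv = np.linalg.solve(J @ J.T, J)     # (J^T)^+ for full row-rank J
    N = np.eye(J.shape[1]) - J.T @ JT_pinv    # projector onto null(J)
    return J.T @ F + N @ tau0

# 4-DOF planar arm example: m = 2 task dimensions, n = 4 joints
J = np.random.randn(2, 4)
T = torque_map(J, F=np.array([1.0, 0.5]), tau0=np.random.randn(4))
print(np.linalg.solve(J @ J.T, J) @ T)        # recovers F (up to numerics)
```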

  12. An Integrated Framework for Parameter-based Optimization of Scientific Workflows.

    PubMed

    Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as the grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.

  13. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and color-space variables using false-color mapping. The system then increases or decreases the reference intensity following the map data, optimizing it with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, made it possible to change the integration time without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid the optimization of SNR and sensitivity for optical coherence tomography systems.

  14. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    PubMed

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

    Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the acquisition time increases n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid-saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently determined by new k-space sampling patterns that are developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
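    The per-pixel inverse Laplace step can be illustrated with a regularized non-negative least-squares fit against a dictionary of exponential decays. A minimal sketch (echo times, T2 grid, and regularization weight are illustrative choices, not the paper's values):

```python
import numpy as np
from scipy.optimize import nnls

def t2_distribution(signal, TE, T2_grid, reg=0.01):
    """Pixelwise inverse Laplace transform via regularized NNLS:
    fit s(TE) = sum_j a_j * exp(-TE / T2_j) with a_j >= 0."""
    A = np.exp(-TE[:, None] / T2_grid[None, :])          # decay dictionary
    A_aug = np.vstack([A, reg * np.eye(len(T2_grid))])   # Tikhonov rows
    s_aug = np.concatenate([signal, np.zeros(len(T2_grid))])
    amplitudes, _ = nnls(A_aug, s_aug)
    return amplitudes

TE = np.linspace(0.002, 0.2, 32)                         # echo times (s)
T2_grid = np.logspace(-3, 0, 64)                         # candidate T2 values (s)
sig = 0.7*np.exp(-TE/0.05) + 0.3*np.exp(-TE/0.3)         # two-pool toy pixel
a = t2_distribution(sig, TE, T2_grid)
print(T2_grid[a.argmax()])                               # dominant T2, near 0.05 s
```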

  15. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361
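    In compact form, the constrained variational problem described above can be written as follows (notation assembled from the abstract: m_T and m_R are the template and reference images, v the stationary velocity field, w the mass source, and β_v, β_w the regularization weights):

```latex
\begin{aligned}
\min_{v,\,w}\quad & \frac{1}{2}\,\bigl\| m(\cdot,1) - m_R \bigr\|_{L^2(\Omega)}^2
  \;+\; \frac{\beta_v}{2}\,\| v \|_{H^1(\Omega)}^2
  \;+\; \frac{\beta_w}{2}\,\| w \|_{H^1(\Omega)}^2 \\
\text{subject to}\quad & \partial_t m + v \cdot \nabla m = 0, \qquad m(\cdot,0) = m_T, \\
& \nabla \cdot v = w .
\end{aligned}
```

    The divergence constraint is what gives explicit control over the compressibility of the deformation map: w ≡ 0 recovers the incompressible, Stokes-like case, while inverting for w jointly with v allows controlled volume change.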

  16. Depth image super-resolution via semi self-taught learning framework

    NASA Astrophysics Data System (ADS)

    Zhao, Furong; Cao, Zhiguo; Xiao, Yang; Zhang, Xiaodi; Xian, Ke; Li, Ruibo

    2017-06-01

    Depth images have recently attracted much attention in computer vision and in high-quality 3D content production for 3DTV and 3D movies. In this paper, we present a new semi-self-taught learning framework for enhancing the resolution of depth maps without making use of ancillary color image data at the target resolution, or of multiple aligned depth maps. Our framework consists of cascaded random forests proceeding from coarse to fine results. We learn the surface information and structure transformations both from a small set of high-quality depth exemplars and from the input depth map itself across different scales. Considering that edges play an important role in depth map quality, we optimize an effective regularized objective, computed on the output image space and the input edge space, within the random forests. Experiments show the effectiveness and superiority of our method against other techniques with or without aligned RGB information.

  17. A Locally Optimal Algorithm for Estimating a Generating Partition from an Observed Time Series and Its Application to Anomaly Detection.

    PubMed

    Ghalyan, Najah F; Miller, David J; Ray, Asok

    2018-06-12

    Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty lies in a heuristic nearest-neighbor symbol-assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and failure data from a polycrystalline alloy material.
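    The guaranteed-descent alternation can be illustrated in miniature. The sketch below is a deliberately simplified scalar version (the actual method operates on delay vectors and the full joint objective), but it shows the two alternating steps, nearest-reconstruction-value symbol assignment and reconstruction-value refit, each of which cannot increase the discrepancy:

```python
import numpy as np

def symbolize(x, k=4, iters=100, seed=0):
    """Lloyd-style alternating minimization: a scalar toy version of
    locally optimal symbolization with guaranteed discrepancy descent."""
    rng = np.random.default_rng(seed)
    recon = rng.choice(x, size=k, replace=False)       # initial recon values
    for _ in range(iters):
        symbols = np.argmin(np.abs(x[:, None] - recon[None, :]), axis=1)
        new = np.array([x[symbols == j].mean() if np.any(symbols == j)
                        else recon[j] for j in range(k)])
        if np.allclose(new, recon):
            break                                      # converged: local optimum
        recon = new
    return symbols, recon

# logistic map time series, symbolized into a 4-letter alphabet
x = np.empty(2000); x[0] = 0.3
for t in range(1999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
symbols, recon = symbolize(x)
```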

  18. Current trends in satellite based emergency mapping - the need for harmonisation

    NASA Astrophysics Data System (ADS)

    Voigt, Stefan

    2013-04-01

    During the past years, the availability and use of satellite image data to support disaster management and humanitarian relief organisations have increased greatly. Automation and data processing techniques are improving steadily, as is the global capacity for accessing and processing satellite imagery. More and more global activities, via the internet and through global organisations like the United Nations or the International Charter Space and Major Disasters, engage in the topic, while at the same time more and more national or local centres take up rapid mapping operations and activities. In order to make even more effective use of this very positive increase in capacity (for the operational provision of analysis results, for fast validation of satellite-derived damage assessments, for better cooperation in the joint inter-agency generation of rapid mapping products, and for general scientific use), rapid mapping results in general need to be better harmonised, if not standardised. In this presentation, experiences from many years of rapid mapping gained by the DLR Center for Satellite Based Crisis Information (ZKI) within the context of national activities, the International Charter Space and Major Disasters, GMES/Copernicus, etc. are reported. Furthermore, an overview is given of how automation, quality assurance and optimization can be achieved through standard operating procedures within a rapid mapping workflow. Building on this long-term rapid mapping experience, and on the DLR initiative to set in place an "International Working Group on Satellite Based Emergency Mapping", current trends in rapid mapping are discussed, and thoughts are presented on how the sharing of rapid mapping information can be optimized by harmonising analysis results and data structures. Such a harmonisation of analysis procedures, nomenclatures and representations of data as well as metadata is the basis for better cooperation within the global rapid mapping community across local/national, regional/supranational and global scales.

  19. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

    Effective design strategies are needed for a class of systems which may be termed Experimental Space Systems (ESS). These systems, which include large space antennas and observatories, space platforms, earth satellites and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoff, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  20. A Transformation Approach to Optimal Control Problems with Bounded State Variables

    NASA Technical Reports Server (NTRS)

    Hanafy, Lawrence Hanafy

    1971-01-01

    A technique is described and utilized in the study of the solutions to various general problems in optimal control theory, which are converted into Lagrange problems in the calculus of variations. This is accomplished by mapping certain properties in Euclidean space onto closed control and state regions. Nonlinear control problems with a unit m-cube as the control region and a unit n-cube as the state region are considered.

  1. The Joint Milli-Arcsecond Pathfinder Survey (J-MAPS) Mission: Application for Space Situational Awareness

    DTIC Science & Technology

    2008-09-01

    One implication of this is that the instrument can physically resolve satellites at smaller separations than current and existing optical SSA assets ... with the potential for 24/7 taskability and near-real-time capability. By optimizing an instrument to perform position measurement rather than ... sensors. The J-MAPS baseline also includes a novel filter-grating wheel, of interest in the area of non-resolved object characterization. We discuss the ...

  2. State Space Modeling of Time-Varying Contemporaneous and Lagged Relations in Connectivity Maps

    PubMed Central

    Molenaar, Peter C. M.; Beltz, Adriene M.; Gates, Kathleen M.; Wilson, Stephen J.

    2017-01-01

    Most connectivity mapping techniques for neuroimaging data assume stationarity (i.e., network parameters are constant across time), but this assumption does not always hold true. The authors provide a description of a new approach for simultaneously detecting time-varying (or dynamic) contemporaneous and lagged relations in brain connectivity maps. Specifically, they use a novel raw data likelihood estimation technique (involving a second-order extended Kalman filter/smoother embedded in a nonlinear optimizer) to determine the variances of the random walks associated with state space model parameters and their autoregressive components. The authors illustrate their approach with simulated and blood oxygen level-dependent functional magnetic resonance imaging data from 30 daily cigarette smokers performing a verbal working memory task, focusing on seven regions of interest (ROIs). Twelve participants had dynamic directed functional connectivity maps: Eleven had one or more time-varying contemporaneous ROI state loadings, and one had a time-varying autoregressive parameter. Compared to smokers without dynamic maps, smokers with dynamic maps performed the task with greater accuracy. Thus, accurate detection of dynamic brain processes is meaningfully related to behavior in a clinical sample. PMID:26546863

  3. Mapping of Drug-like Chemical Universe with Reduced Complexity Molecular Frameworks.

    PubMed

    Kontijevskis, Aleksejs

    2017-04-24

    The emergence of the DNA-encoded chemical libraries (DEL) field in the past decade has attracted the attention of the pharmaceutical industry as a powerful mechanism for the discovery of novel drug-like hits for various biological targets. Nuevolution Chemetics technology enables DNA-encoded synthesis of billions of chemically diverse drug-like small molecule compounds, and the efficient screening and optimization of these, facilitating effective identification of drug candidates at an unprecedented speed and scale. Although many approaches have been developed by the cheminformatics community for the analysis and visualization of drug-like chemical space, most of them are restricted to the analysis of at most a few million compounds and cannot handle collections of 10^8-10^12 compounds typical for DELs. To address this big chemical data challenge, we developed the Reduced Complexity Molecular Frameworks (RCMF) methodology as an abstract and very general way of representing chemical structures. By further introducing RCMF descriptors, we constructed a global framework map of drug-like chemical space and demonstrated how the chemical space occupied by multi-million-member drug-like Chemetics DNA-encoded libraries and virtual combinatorial libraries with >10^12 members could be analyzed and mapped without a need for library enumeration. We further validate the approach by performing RCMF-based searches in a drug-like chemical universe and mapping Chemetics library selection outputs for LSD1 targets on a global framework chemical space map.
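    RCMF itself is specific to this paper, but the general idea of a reduced-complexity framework can be approximated with standard cheminformatics tooling. A hypothetical sketch using RDKit's Murcko scaffolds (scaffold extraction followed by genericization is an illustration in the spirit of RCMF, not Nuevolution's definition):

```python
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

# Reduce a molecule to a framework key: strip side chains to the Murcko
# scaffold, then abstract away atom types and bond orders.
mol = Chem.MolFromSmiles("Cc1ccc(cc1)C(=O)N2CCN(CC2)c3ncccn3")
scaffold = MurckoScaffold.GetScaffoldForMol(mol)
generic = MurckoScaffold.MakeScaffoldGeneric(scaffold)
print(Chem.MolToSmiles(scaffold))   # ring systems and linkers only
print(Chem.MolToSmiles(generic))    # carbon-skeleton framework key
```

    Because such framework keys can be computed from building blocks rather than enumerated products, a map built on them can cover combinatorial libraries far beyond explicit enumeration, which is the point the abstract makes for >10^12-member collections.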

  4. ADME-Space: a new tool for medicinal chemists to explore ADME properties.

    PubMed

    Bocci, Giovanni; Carosati, Emanuele; Vayer, Philippe; Arrault, Alban; Lozano, Sylvain; Cruciani, Gabriele

    2017-07-25

    We introduce a new chemical space for drugs and drug-like molecules, based exclusively on their in silico ADME behaviour. This ADME-Space is based on a self-organizing map (SOM) applied to 26,000 molecules. Twenty accurate QSPR models, describing important ADME properties, were developed and subsequently used as new molecular descriptors not related to molecular structure. Applications include permeability, active transport, metabolism and bioavailability studies, but the method can even be used to discuss drug-drug interactions (DDIs), and it can be extended to additional ADME properties. Thus, the ADME-Space opens a new framework for multi-parametric data analysis in drug discovery, where all ADME behaviours of molecules are condensed in one map: it allows medicinal chemists to simultaneously monitor several ADME properties, rapidly select optimal ADME profiles, retrieve warnings about potential ADME problems and DDIs, or select proper in vitro experiments.
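    The construction, property predictions as descriptors and a SOM as the map, translates directly into code. A minimal sketch (assuming the third-party minisom package; the 30x30 grid, 20 property columns, and training length are illustrative, and the random matrix stands in for real QSPR predictions):

```python
import numpy as np
from minisom import MiniSom

# X: (n_molecules, 20) matrix of predicted ADME property values, one column
# per QSPR model output; descriptors are property predictions, not structure.
X = np.random.randn(1000, 20)                 # stand-in, assumed standardized

som = MiniSom(30, 30, input_len=20, sigma=2.0, learning_rate=0.5, random_seed=1)
som.pca_weights_init(X)
som.train_random(X, num_iteration=20000)

bmu = np.array([som.winner(x) for x in X])    # map cell per molecule
# molecules landing in the same cell share an in-silico ADME profile
```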

  5. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    USGS Publications Warehouse

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. Spatial autocorrelation, the number of desired classes, and the form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between the improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
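    The sampling strategy evaluated here is easy to make concrete: the exact Fisher-Jenks dynamic program costs O(kn²), which is prohibitive for millions of observations but cheap on a sample whose breaks are then applied by full enumeration. A minimal sketch (the DP is a textbook within-class sum-of-squares formulation; sample size and class count are illustrative):

```python
import numpy as np

def fisher_jenks_edges(values, k):
    """Optimal 1-D classification: DP minimizing total within-class sum of
    squared deviations over sorted values. Returns each class's lower bound."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    csum = np.concatenate([[0.0], np.cumsum(x)])
    csq = np.concatenate([[0.0], np.cumsum(x**2)])
    def ssd(i, j):                           # within-class SSD of x[i:j]
        s = csum[j] - csum[i]
        return (csq[j] - csq[i]) - s * s / (j - i)
    cost = np.full((k + 1, n + 1), np.inf); cost[0, 0] = 0.0
    cut = np.zeros((k + 1, n + 1), dtype=int)
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                v = cost[c - 1, i] + ssd(i, j)
                if v < cost[c, j]:
                    cost[c, j], cut[c, j] = v, i
    edges, j = [], n                         # walk back the optimal cuts
    for c in range(k, 0, -1):
        j = cut[c, j]
        edges.append(x[j])
    return np.array(edges[::-1])

# classify a sample, then assign the full data set by enumeration
big = np.random.lognormal(size=1_000_000)
edges = fisher_jenks_edges(np.random.choice(big, size=500, replace=False), k=5)
classes = np.clip(np.searchsorted(edges, big, side="right") - 1, 0, 4)
```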

  6. Shared protection based virtual network mapping in space division multiplexing optical networks

    NASA Astrophysics Data System (ADS)

    Zhang, Huibin; Wang, Wei; Zhao, Yongli; Zhang, Jie

    2018-05-01

    Space Division Multiplexing (SDM) has been introduced to improve the capacity of optical networks. In SDM optical networks, there are multiple cores/modes in each fiber link, and spectrum resources are multiplexed in both the frequency and the core/mode dimensions. Enabled by network virtualization technology, one SDM optical network substrate can be shared by several virtual network operators. As with point-to-point connection services, virtual networks (VNs) also need a certain survivability to guard against network failures. Based on customers' heterogeneous requirements on the survivability of their virtual networks, this paper studies the shared-protection-based VN mapping problem and proposes a Minimum Free Frequency Slots (MFFS) mapping algorithm to improve spectrum efficiency. Simulation results show that the proposed algorithm can optimize SDM optical networks significantly in terms of blocking probability and spectrum utilization.

  7. Surrogate based wind farm layout optimization using manifold mapping

    NASA Astrophysics Data System (ADS)

    Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester

    2016-09-01

    The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high-fidelity wind farm layout optimization (WFLO) with accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology, and its performance was compared with that of direct optimization using the high-fidelity model. A significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the two models, and the number and position of the mapping points, strongly influence the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with very few fine-model simulations.
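    The fine/coarse pairing used in the verification case, two Jensen wake models differing only in decay coefficient, makes the surrogate loop easy to prototype. The sketch below is a simplified stand-in under stated assumptions: a one-dimensional spacing variable, a toy power-versus-cabling objective, and a first-order additive correction to the coarse model (manifold mapping proper aligns vector-valued responses; the scalar correction here is a common multi-fidelity simplification that shares its fixed-point property):

```python
from scipy.optimize import minimize_scalar
import numpy as np

def jensen_deficit(x, k, ct=0.8, r0=40.0):
    """Jensen top-hat wake: velocity deficit a distance x downstream."""
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

def objective(x, k):
    """Toy two-turbine trade-off: wake power loss vs. a spacing penalty."""
    power = 1.0 + (1.0 - jensen_deficit(x, k)) ** 3   # normalized farm power
    return -power + 2.0e-4 * x

fine = lambda x: objective(x, k=0.04)     # slow wake recovery ('fine' model)
coarse = lambda x: objective(x, k=0.10)   # fast wake recovery ('coarse' model)
dfdx = lambda f, x, h=1e-3: (f(x + h) - f(x - h)) / (2 * h)

x_k = minimize_scalar(coarse, bounds=(100, 2000), method="bounded").x
for _ in range(20):
    shift = fine(x_k) - coarse(x_k)               # zeroth-order mismatch
    slope = dfdx(fine, x_k) - dfdx(coarse, x_k)   # first-order mismatch
    surr = lambda x, x_k=x_k, shift=shift, slope=slope: \
        coarse(x) + shift + slope * (x - x_k)
    x_next = minimize_scalar(surr, bounds=(100, 2000), method="bounded").x
    if abs(x_next - x_k) < 1e-2:
        break
    x_k = x_next
print(x_k)   # stationary point of the fine model
```

    At a fixed point the corrected surrogate's gradient matches the fine model's, so the loop terminates at a fine-model optimum while charging only a handful of fine evaluations per iteration to the expensive model.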

  8. An Automated Pipeline for Engineering Many-Enzyme Pathways: Computational Sequence Design, Pathway Expression-Flux Mapping, and Scalable Pathway Optimization.

    PubMed

    Halper, Sean M; Cetnar, Daniel P; Salis, Howard M

    2018-01-01

    Engineering many-enzyme metabolic pathways suffers from the design curse of dimensionality. There are an astronomical number of synonymous DNA sequence choices, though relatively few will express an evolutionarily robust, maximally productive pathway without metabolic bottlenecks. To solve this challenge, we have developed an integrated, automated computational-experimental pipeline that identifies a pathway's optimal DNA sequence without high-throughput screening or many cycles of design-build-test. The first step applies our Operon Calculator algorithm to design a host-specific, evolutionarily robust bacterial operon sequence with maximally tunable enzyme expression levels. The second step applies our RBS Library Calculator algorithm to systematically vary enzyme expression levels with the smallest-sized library. After characterizing a small number of constructed pathway variants, measurements are supplied to our Pathway Map Calculator algorithm, which then parameterizes a kinetic metabolic model that ultimately predicts the pathway's optimal enzyme expression levels and DNA sequences. Altogether, our algorithms provide the ability to efficiently map the pathway's sequence-expression-activity space and predict DNA sequences with desired metabolic fluxes. Here, we provide a step-by-step guide to applying the Pathway Optimization Pipeline to a desired multi-enzyme pathway in a bacterial host.

  9. The relationship of acquisition systems to automated stereo correlation.

    USGS Publications Warehouse

    Colvocoresses, A.P.

    1983-01-01

    Today a concerted effort is being made to expedite the mapping process through automated correlation of stereo data. Stereo correlation involves the comparison of radiance (brightness) signals or patterns recorded by sensors. Conventionally, two-dimensional area correlation is utilized but this is a rather slow and cumbersome procedure. Digital correlation can be performed in only one dimension where suitable signal patterns exist, and the one-dimensional mode is much faster. Electro-optical (EO) systems, suitable for space use, also have much greater flexibility than film systems. Thus, an EO space system can be designed which will optimize one-dimensional stereo correlation and lead toward the automation of topographic mapping.-from Author
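    The one-dimensional digital correlation being advocated reduces to a scan-line matching loop. A toy sketch (normalized cross-correlation of two radiance profiles; windowing and sub-pixel refinement are omitted):

```python
import numpy as np

def best_shift(left, right, max_shift=50):
    """Find the shift (parallax) of `right` relative to `left` that maximizes
    the normalized cross-correlation of two 1-D radiance signals."""
    l = (left - left.mean()) / left.std()
    scores = []
    for s in range(-max_shift, max_shift + 1):
        r = np.roll(right, s)
        r = (r - r.mean()) / r.std()
        scores.append((np.mean(l * r), s))
    return max(scores)[1]          # shift with the highest correlation
```

    Matching in one dimension like this presumes the acquisition geometry aligns epipolar lines with the signal direction, which is exactly the design freedom the entry attributes to EO space systems.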

  10. Drawing road networks with focus regions.

    PubMed

    Haunert, Jan-Henrik; Sering, Leon

    2011-12-01

    Mobile users of maps typically need detailed information about their surroundings plus some context information about remote places. To prevent parts of the map from becoming too dense, cartographers have designed mapping functions that enlarge a user-defined focus region--such functions are sometimes called fish-eye projections. The extra map space occupied by the enlarged focus region is compensated by distorting other parts of the map. We argue that, in a map showing a network of roads relevant to the user, distortion should preferably take place in those areas where the network is sparse. Therefore, we do not apply a predefined mapping function. Instead, we consider the road network as a graph whose edges are the road segments. We compute a new spatial mapping with a graph-based optimization approach, minimizing the sum of squared distortions at edges. Our optimization method is based on a convex quadratic program (CQP); CQPs can be solved in polynomial time. Important requirements on the output map are expressed as linear inequalities. In particular, we show how to forbid edge crossings. We have implemented our method in a prototype tool. For instances of different sizes, our method generated output maps that were far less distorted than those generated with a predefined fish-eye projection. Future work is needed to automate the selection of roads relevant to the user. Furthermore, we aim at fast heuristics for application in real-time systems. © 2011 IEEE
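    The least-squares-over-edges formulation can be prototyped in a few lines with an off-the-shelf convex solver. A minimal sketch (a four-node toy network; the per-edge target scales and the pinned node are illustrative, and the paper's crossing-prevention inequalities are only indicated by a comment):

```python
import cvxpy as cp
import numpy as np

# nodes: original junction coordinates; edges: road segments between them
nodes = np.array([[0, 0], [1, 0], [1, 1], [3, 2]], dtype=float)
edges = [(0, 1), (1, 2), (2, 3)]
scale = [2.0, 2.0, 1.0]          # desired magnification per edge (focus: 2x)

v = cp.Variable((4, 2))          # new node positions
cost = sum(cp.sum_squares((v[i] - v[j]) - s * (nodes[i] - nodes[j]))
           for (i, j), s in zip(edges, scale))
constraints = [v[0] == nodes[0]]  # pin one node to fix the translation
# (the paper adds further linear inequalities here, e.g. to forbid crossings)
cp.Problem(cp.Minimize(cost), constraints).solve()
print(v.value)
```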

  11. Cloud GPU-based simulations for SQUAREMR.

    PubMed

    Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H

    2017-01-01

    Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms. The developed cloud-based cluster and optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would likely be acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Low-energy Lunar Trajectories with Lunar Flybys

    NASA Astrophysics Data System (ADS)

    Wei, B. W.; Li, Y. S.

    2017-09-01

    Low-energy lunar trajectories with lunar flybys are investigated in the Sun-Earth-Moon bicircular problem (BCP), and the characteristics of the distribution of such trajectories in the phase space are summarized. First, low-energy lunar trajectories with lunar flybys are sought using invariant manifolds of the BCP system. Second, by treating time as an augmented dimension in the phase space of the nonautonomous system, a state space map is constructed that reveals the distribution of these lunar trajectories in the phase space. It becomes clear that low-energy lunar trajectories exist in families, and that every moment of a Sun-Earth-Moon synodic period can serve as a departure date. Finally, the variation of departure impulse, midcourse impulse at the Poincaré section, transfer duration, and system energy across the different families is analyzed, yielding the impulse-optimal family and the transfer-duration-optimal family.

  13. State space modeling of time-varying contemporaneous and lagged relations in connectivity maps.

    PubMed

    Molenaar, Peter C M; Beltz, Adriene M; Gates, Kathleen M; Wilson, Stephen J

    2016-01-15

    Most connectivity mapping techniques for neuroimaging data assume stationarity (i.e., network parameters are constant across time), but this assumption does not always hold true. The authors provide a description of a new approach for simultaneously detecting time-varying (or dynamic) contemporaneous and lagged relations in brain connectivity maps. Specifically, they use a novel raw data likelihood estimation technique (involving a second-order extended Kalman filter/smoother embedded in a nonlinear optimizer) to determine the variances of the random walks associated with state space model parameters and their autoregressive components. The authors illustrate their approach with simulated and blood oxygen level-dependent functional magnetic resonance imaging data from 30 daily cigarette smokers performing a verbal working memory task, focusing on seven regions of interest (ROIs). Twelve participants had dynamic directed functional connectivity maps: Eleven had one or more time-varying contemporaneous ROI state loadings, and one had a time-varying autoregressive parameter. Compared to smokers without dynamic maps, smokers with dynamic maps performed the task with greater accuracy. Thus, accurate detection of dynamic brain processes is meaningfully related to behavior in a clinical sample. Published by Elsevier Inc.

  14. Distortion correction of echo planar images applying the concept of finite rate of innovation to point spread function mapping (FRIP).

    PubMed

    Nunes, Rita G; Hajnal, Joseph V

    2018-06-01

    Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.

  15. Application of Metaheuristic and Deterministic Algorithms for Aircraft Reference Trajectory Optimization =

    NASA Astrophysics Data System (ADS)

    Murrieta Mendoza, Alejandro

    Aircraft reference trajectory optimization is an alternative method to reduce fuel consumption and thus the pollution released into the atmosphere. Fuel consumption reduction is of special importance for two reasons: first, the aeronautical industry is responsible for 2% of the CO2 released into the atmosphere, and second, it reduces the flight cost. The aircraft fuel model was obtained from a numerical performance database which was created and validated by our industrial partner from experimental flight-test data. A new methodology using the numerical database was proposed in this thesis to compute the fuel burn for a given trajectory. Weather parameters such as wind and temperature were taken into account, as they have an important effect on fuel burn. The open-source model used to obtain the weather forecast was provided by Weather Canada. A combination of linear and bilinear interpolations allowed the required weather data to be found. The search space was modelled using different graphs: one graph was used for mapping the different flight phases such as climb, cruise and descent, and another graph was used for mapping the physical space in which the aircraft would perform its flight. The trajectory was optimized in its vertical reference profile using the Beam Search algorithm, and a combination of the Beam Search algorithm with a search-space reduction technique. The trajectory was then optimized simultaneously for the vertical and lateral reference navigation plans, while fulfilling a Required Time of Arrival constraint, using different metaheuristic algorithms, including the artificial bee colony and ant colony optimization. Results were validated using the FlightSIM software, a commercial Flight Management System, an exhaustive search algorithm, and as-flown flights obtained from FlightAware. All the algorithms were able to reduce the fuel burn and the flight costs.
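    Beam search over a layered trajectory graph is straightforward to sketch. The snippet below is illustrative only; the node encoding, climb constraint, and toy fuel-burn function are stand-ins for the thesis's performance-database interpolation:

```python
import heapq

def beam_search(levels, n_waypoints, fuel_burn, beam_width=8):
    """Vertical-profile search: node = (cumulative cost, waypoint, flight
    level); at each waypoint keep only the beam_width cheapest states."""
    beam = [(0.0, 0, fl) for fl in levels]
    for wp in range(1, n_waypoints):
        candidates = []
        for cost, _, fl in beam:
            for nxt in levels:
                if abs(nxt - fl) <= 2000:               # climb/descent limit
                    candidates.append((cost + fuel_burn(wp, fl, nxt), wp, nxt))
        beam = heapq.nsmallest(beam_width, candidates)  # prune to the beam
    return min(beam)

# toy burn model: cruising higher is cheaper, altitude changes cost extra
burn = lambda wp, fl, nxt: 100.0 - 0.001 * nxt + 0.005 * abs(nxt - fl)
print(beam_search(levels=range(28000, 42001, 2000), n_waypoints=20,
                  fuel_burn=burn))
```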

  16. A fast optimization approach for treatment planning of volumetric modulated arc therapy.

    PubMed

    Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong

    2018-05-30

    Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. Initial, equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf-sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head-and-neck case. The total number of segments and monitor units is also reduced for VMAT plans. For the three clinical cases, VMAT optimization takes less than 5 min with the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.

  17. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of inverse tone mapping the base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already-decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.

  18. Low-discrepancy sampling of parametric surface using adaptive space-filling curves (SFC)

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Szu, Harold

    2014-05-01

    Space-Filling Curves (SFCs) are encountered in different fields of engineering and computer science, especially where it is important to linearize multidimensional data for effective and robust interpretation of the information. Examples of multidimensional data are matrices, images, tables, computational grids resulting from the discretization of partial differential equations (PDEs), and Electroencephalography (EEG) sensor data. Data operations like matrix multiplications, load/store operations, and the updating and partitioning of data sets can be simplified when we choose an efficient way of traversing the data. In many applications SFCs provide just this optimal manner of mapping multidimensional data onto a one-dimensional sequence. In this report, we begin with an example of a space-filling curve and demonstrate how it can be used to find the most similar match through a set of points using the Fast Fourier Transform (FFT). Next we give a general introduction to space-filling curves and discuss their properties. Finally, we consider a discrete version of space-filling curves and present experimental results on discrete space-filling curves optimized for special tasks.
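    The canonical discrete SFC is the Hilbert curve, whose index-to-coordinate map keeps nearby one-dimensional indices nearby in the plane. A minimal sketch of the standard iterative conversion (order and grid size are illustrative):

```python
def d2xy(order, d):
    """Convert a distance d along a Hilbert curve covering a 2^order x 2^order
    grid into (x, y) coordinates (standard iterative construction)."""
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant contents
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# linearize an 8x8 grid: consecutive indices stay spatial neighbors
path = [d2xy(3, d) for d in range(64)]
```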

  19. Numerical integration and optimization of motions for multibody dynamic systems

    NASA Astrophysics Data System (ADS)

    Aguilar Mayans, Joan

    This thesis considers the optimization and simulation of motions involving rigid body systems. It does so in three distinct parts, with the following topics: optimization and analysis of human high-diving motions, efficient numerical integration of rigid body dynamics with contacts, and motion optimization of a two-link robot arm using Finite-Time Lyapunov Analysis. The first part introduces the concept of eigenpostures, which we use to simulate and analyze human high-diving motions. Eigenpostures are used in two different ways: first, to reduce the complexity of the optimal control problem that we solve to obtain such motions, and second, to generate an eigenposture space to which we map existing real world motions to better analyze them. The benefits of using eigenpostures are showcased through different examples. The second part reviews an extensive list of integration algorithms used for the integration of rigid body dynamics. We analyze the accuracy and stability of the different integrators in the three-dimensional space and the rotation space SO(3). Integrators with an accuracy higher than first order perform more efficiently than integrators with first order accuracy, even in the presence of contacts. The third part uses Finite-time Lyapunov Analysis to optimize motions for a two-link robot arm. Finite-Time Lyapunov Analysis diagnoses the presence of time-scale separation in the dynamics of the optimized motion and provides the information and methodology for obtaining an accurate approximation to the optimal solution, avoiding the complications that timescale separation causes for alternative solution methods.

  20. Planning Paths Through Singularities in the Center of Mass Space

    NASA Technical Reports Server (NTRS)

    Doggett, William R.; Messner, William C.; Juang, Jer-Nan

    1998-01-01

    The center of mass space is a convenient space for planning motions that minimize reaction forces at the robot's base or optimize the stability of a mechanism. A unique problem associated with path planning in the center of mass space is the potential existence of multiple center of mass images for a single Cartesian obstacle, since a single center of mass location can correspond to multiple robot joint configurations. The existence of multiple images results in a need to either maintain multiple center of mass obstacle maps or to update obstacle locations when the robot passes through a singularity, such as when it moves from an elbow-up to an elbow-down configuration. To illustrate the concepts presented in this paper, a path is planned for an example task requiring motion through multiple center of mass space maps. The object of the path planning algorithm is to locate the bang-bang acceleration profile that minimizes the robot's base reactions in the presence of a single Cartesian obstacle. To simplify the presentation, only non-redundant robots are considered and joint non-linearities are neglected.

  1. How do ensembles occupy space?

    NASA Astrophysics Data System (ADS)

    Daffertshofer, A.

    2008-04-01

    To find an answer to the title question, an attractiveness function between agents and locations is introduced, yielding a phenomenological but generic model for the search for optimal distributions of agents over space. Agents can be seen as, e.g., members of biological populations like colonies of bacteria, swarms, and so on. The global attractiveness between agents and locations is maximized, causing (self-propelled) `motion' of agents and, eventually, distinct distributions of agents over space. By the same token, spontaneous changes or `decisions' are realized via competitions between agents as well as between locations. Hence, the model's solutions can be considered a sequence of decisions of agents during their search for a proper location. Depending on initial conditions, both optimal and suboptimal configurations can be reached. For the latter, early decisions are important for avoiding possible conflicts: if the proper moment is missed, then only a few agents can find an optimal solution. Indeed, there is a delicate interplay between the values of the attractiveness function and the constraints, as can be expressed by distinct terms of a potential function containing different Lagrange parameters. The model should be viewed as a top-down approach, as it describes the dynamics of order parameters, i.e. macroscopic variables that reflect affiliations between agents and locations. The dynamics, however, is modified via so-called cost functions that are interpreted in terms of affinity levels. This interpretation can be seen as an original step towards an understanding of the dynamics at the underlying microscopic level. When focusing on the agent, one may say that the dynamics of an order parameter shows the evolution of an agent's intrinsic `map' for solving the problem of space occupation. Importantly, the dynamics does not necessarily distinguish between evolving (or moving) agents and evolving (or moving) locations, though agents are more likely to be actors than the locations. Put differently, an order parameter describes an internal map which is linked to an agent's expectation of finding a certain location. Owing to the dynamical representation, we can therefore follow the change of these maps over time, leading from uncertainty to certainty.

  2. Fuel-Optimal Trajectories in a Planet-Moon Environment Using Multiple Gravity Assists

    NASA Technical Reports Server (NTRS)

    Ross, Shane D.; Grover, Piyush

    2007-01-01

    For low-energy spacecraft trajectories, such as multi-moon orbiters for the Jupiter system, multiple gravity assists by moons can be used in conjunction with ballistic capture to drastically decrease fuel usage. In this paper, we outline a procedure to obtain a family of zero-fuel multi-moon orbiter trajectories, using a family of Keplerian maps derived previously by the first author. The maps capture well the dynamics of the full equations of motion; the phase space contains a connected chaotic zone where intersections between unstable resonant orbit manifolds provide the template for lanes of fast migration between orbits of different semimajor axes. A patched three-body approach is used: the four-body problem is broken down into two three-body problems, and the search space is considerably reduced by the use of properties of the Keplerian maps. We also introduce the notion of a Switching Region, where the perturbations due to the two perturbing moons are of comparable strength and which separates the domains of applicability of the corresponding two Keplerian maps.

  3. MAIN software for density averaging, model building, structure refinement and validation

    PubMed Central

    Turk, Dušan

    2013-01-01

    MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458

  4. Attractors in Sequence Space: Agent-Based Exploration of MHC I Binding Peptides.

    PubMed

    Jäger, Natalie; Wisniewska, Joanna M; Hiss, Jan A; Freier, Anja; Losch, Florian O; Walden, Peter; Wrede, Paul; Schneider, Gisbert

    2010-01-12

    Ant Colony Optimization (ACO) is a meta-heuristic that utilizes a computational analogue of ant trail pheromones to solve combinatorial optimization problems. The size of the ant colony and the representation of the ants' pheromone trails are specific to the given optimization problem. In the present study, we employed ACO to generate novel peptides that stabilize the MHC I protein on the plasma membrane of a murine lymphoma cell line. A jury of feedforward neural network classifiers served as the fitness function for peptide design by ACO. Bioactive murine MHC I H-2K(b) stabilizing as well as nonstabilizing octapeptides were designed, synthesized and tested. These peptides reveal residue motifs that are relevant for MHC I receptor binding. We demonstrate how the performance of the implemented ACO algorithm depends on the colony size and the size of the search space. The actual peptide design process by ACO constitutes a search path in sequence space that can be visualized as trajectories on a self-organizing map (SOM). By projecting the sequence space onto a SOM we visualize the convergence of the different solutions that emerge during the optimization process in sequence space. The SOM representation reveals attractors in sequence space for MHC I binding peptides. The combination of ACO and SOM enables systematic peptide optimization. This technique allows for the rational design of various types of bioactive peptides with minimal experimental effort. Here, we demonstrate its successful application to the design of MHC-I binding and nonbinding peptides which exhibit substantial bioactivity in a cell-based assay. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
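    Position-wise pheromone ACO for fixed-length sequence design can be sketched compactly. Everything below is illustrative: the toy fitness function stands in for the paper's neural-network jury, and the colony size, evaporation rate, and motif are arbitrary:

```python
import numpy as np

AA = list("ACDEFGHIKLMNPQRSTVWY")          # 20 amino acids, 8-mer peptides
L, n_ants, rho = 8, 30, 0.1

def fitness(pep):
    """Toy stand-in for the neural-network jury: rewards hydrophobic
    residues at two hypothetical anchor positions."""
    return float(pep[4] in "FYW") + float(pep[7] in "LIVM")

tau = np.ones((L, len(AA)))                # pheromone per (position, residue)
for generation in range(200):
    probs = tau / tau.sum(axis=1, keepdims=True)
    ants = ["".join(np.random.choice(AA, p=probs[i]) for i in range(L))
            for _ in range(n_ants)]
    best = max(ants, key=fitness)
    tau *= (1.0 - rho)                     # pheromone evaporation
    for i, aa in enumerate(best):          # reinforce the best ant's trail
        tau[i, AA.index(aa)] += fitness(best)
print(best, fitness(best))
```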

  5. Two fast approximate wavelet algorithms for image processing, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Wickerhauser, Mladen V.

    1994-07-01

    We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. Then it becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. In the case where the number N of pixels or measurements is more than 1000 or so, the classical O(N^3) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method that has complexity O(N^2 log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.

  6. The infimum principle

    NASA Technical Reports Server (NTRS)

    Geering, H. P.; Athans, M.

    1973-01-01

    A complete theory of necessary and sufficient conditions is discussed for a control to be superior with respect to a nonscalar-valued performance criterion, which maps into a finite-dimensional, integrally closed, directed, partially ordered linear space. The applicability of the theory to the analysis of dynamic vector estimation problems and to a class of uncertain optimal control problems is demonstrated.

  7. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system are experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
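
    A small sketch of the processing chain the abstract describes, with a toy one-dimensional field model standing in for the magnetic sensor array (the 9-sensor geometry, noise level and network size are illustrative):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 100.0, size=(2000, 1))            # actuator positions (mm)
    # Toy field model: 9 sensors whose response decays with distance to the source.
    B = np.hstack([1.0 / (1.0 + ((pos - c) / 10.0) ** 2) for c in np.linspace(0, 100, 9)])
    B += rng.normal(0.0, 1e-3, B.shape)                      # Gaussian sensor noise

    pca = PCA(n_components=3).fit(B)                          # pseudo-linear filter
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
    ann.fit(pca.transform(B), pos.ravel())                    # field -> position map
    print(ann.predict(pca.transform(B[:3])))                  # recovered positions
    ```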

  8. Direct demodulation method for heavy atom position determination in protein crystallography

    NASA Astrophysics Data System (ADS)

    Zhou, Liang; Liu, Zhong-Chuan; Liu, Peng; Dong, Yu-Hui

    2013-01-01

    The first step of phasing in any de novo protein structure determination using isomorphous replacement (IR) or anomalous scattering (AS) experiments is to find heavy atom positions. Traditionally, heavy atom positions can be solved by inspecting the difference Patterson maps. Due to the weak signals in isomorphous or anomalous differences and the noisy background in the Patterson map, the search for heavy atoms may become difficult. Here, the direct demodulation (DD) method is applied to the difference Patterson maps to reduce the noisy backgrounds and sharpen the signal peaks. The real space Patterson search by using these optimized maps can locate the heavy atom positions more accurately. It is anticipated that the direct demodulation method can assist in heavy atom position determination and facilitate the de novo structure determination of proteins.

  9. On the use of ANN interconnection weights in optimal structural design

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Szewczyk, Z.

    1992-01-01

    The present paper describes the use of the interconnection weights of a multilayer, feedforward network to extract information pertinent to the mapping space that the network is assumed to represent. In particular, these weights can be used to determine an appropriate network architecture and to assess whether an adequate number of training patterns (input-output pairs) has been used for network training. The weight analysis also provides an approach to assess the influence of each input parameter on a selected output component. The paper shows the significance of this information in decomposition-driven optimal design.
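
    One common scheme for extracting input influence from trained interconnection weights is Garson's algorithm; the sketch below follows that scheme for a single-hidden-layer, single-output network and may differ in detail from the weight analysis used in the paper.

    ```python
    import numpy as np

    def input_relevance(W_ih, W_ho):
        """Garson-style relevance of each input, from trained weights only.

        W_ih : (n_inputs, n_hidden) input-to-hidden weight matrix
        W_ho : (n_hidden,)          hidden-to-output weights (single output)
        """
        contrib = np.abs(W_ih) * np.abs(W_ho)           # |path| through each hidden unit
        contrib /= contrib.sum(axis=0, keepdims=True)   # share within each hidden unit
        rel = contrib.sum(axis=1)                       # accumulate over hidden units
        return rel / rel.sum()                          # normalized influence per input

    rng = np.random.default_rng(0)
    print(input_relevance(rng.normal(size=(4, 6)), rng.normal(size=6)))
    ```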

  10. Gradient-based adaptation of general Gaussian kernels.

    PubMed

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
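
    A minimal sketch of the exponential-map parameterization: for any symmetric A, M = expm(A) is symmetric positive definite, so unconstrained gradient steps on A stay on the manifold of valid kernels, and since det(expm(A)) = exp(tr A), holding the trace constant fixes the kernel volume. The example values are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def general_gauss_kernel(x, y, A):
        """Gaussian kernel k(x, y) = exp(-(x-y)^T M (x-y)) with M = expm(A)."""
        M = expm(A)                    # SPD for any symmetric A
        d = x - y
        return np.exp(-d @ M @ d)

    A = np.array([[0.0, 0.3],
                  [0.3, -1.0]])        # symmetric, unconstrained parameter matrix
    print(general_gauss_kernel(np.array([1.0, 0.0]), np.array([0.0, 1.0]), A))
    ```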

  11. Distributed Fast Self-Organized Maps for Massive Spectrophotometric Data Analysis.

    PubMed

    Dafonte, Carlos; Garabato, Daniel; Álvarez, Marco A; Manteiga, Minia

    2018-05-03

    Analyzing huge amounts of data becomes essential in the era of Big Data, where databases are populated with hundreds of gigabytes that must be processed to extract knowledge. Hence, classical algorithms must be adapted towards distributed computing methodologies that leverage the underlying computational power of these platforms. Here, a parallel, scalable, and optimized design for self-organized maps (SOM) is proposed in order to analyze massive data gathered by the spectrophotometric sensor of the European Space Agency (ESA) Gaia spacecraft, although it could be extrapolated to other domains. The performance comparison between the sequential implementation and the distributed ones based on Apache Hadoop and Apache Spark is an important part of the work, as well as the detailed analysis of the proposed optimizations. Finally, a domain-specific visualization tool to explore astronomical SOMs is presented.
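
    The paper's contribution is the distributed (Hadoop/Spark) implementation, which this sketch does not attempt; shown below is only the sequential SOM update rule that such implementations parallelize, in plain numpy, with illustrative map size and decay schedules.

    ```python
    import numpy as np

    def train_som(data, rows=10, cols=10, iters=10_000, lr0=0.5, sigma0=3.0):
        """Train a small self-organizing map with the sequential update rule."""
        rng = np.random.default_rng(0)
        w = rng.normal(size=(rows, cols, data.shape[1]))           # prototype vectors
        grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))
        for t in range(iters):
            x = data[rng.integers(len(data))]                      # random sample
            bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (rows, cols))
            frac = t / iters
            lr = lr0 * (1.0 - frac)                                # learning-rate decay
            sigma = sigma0 * (1.0 - frac) + 0.5                    # neighbourhood decay
            h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)                       # pull BMU neighbourhood
        return w

    codebook = train_som(np.random.default_rng(1).normal(size=(500, 3)))
    ```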

  12. Space Weather Activities of IONOLAB Group: TEC Mapping

    NASA Astrophysics Data System (ADS)

    Arikan, F.; Yilmaz, A.; Arikan, O.; Sayin, I.; Gurun, M.; Akdogan, K. E.; Yildirim, S. A.

    2009-04-01

    Being a key player in Space Weather, ionospheric variability affects the performance of both communication and navigation systems. To improve the performance of these systems, the ionosphere has to be monitored. Total Electron Content (TEC), the line integral of the electron density along a ray path, is an important parameter for investigating ionospheric variability. A cost-effective way of obtaining TEC is by using dual-frequency GPS receivers. Since these measurements are sparse in space, accurate and robust interpolation techniques are needed to interpolate (or map) the TEC distribution for a given region in space. However, the TEC data derived from GPS measurements contain measurement noise, model and computational errors. Thus, it is necessary to analyze the interpolation performance of the techniques on synthetic data sets that can represent various ionospheric states. In this way, the interpolation performance of the techniques can be compared over many parameters that can be controlled to represent the desired ionospheric states. In this study, Multiquadrics, Inverse Distance Weighting (IDW), Cubic Splines, Ordinary and Universal Kriging, Random Field Priors (RFP), Multi-Layer Perceptron Neural Network (MLP-NN), and Radial Basis Function Neural Network (RBF-NN) are employed as the spatial interpolation algorithms. These mapping techniques are initially tried on synthetic TEC surfaces for parameter and coefficient optimization and determination of error bounds. The interpolation performance of these methods is compared on synthetic TEC surfaces over the parameters of sampling pattern, number of samples, the variability of the surface and the trend type in the TEC surfaces. By examining the performance of the interpolation methods, it is observed that the Kriging methods, RFP and NN have important advantages and possible disadvantages depending on the given constraints. It is also observed that the determining parameter in the error performance is the trend in the ionosphere. Optimization of the algorithms in terms of their performance parameters (like the choice of the semivariogram function for Kriging algorithms and the hidden layer and neuron numbers for MLP-NN) mostly depends on the behavior of the ionosphere at the given time instant for the desired region. The sampling pattern and number of samples are the other important parameters that may contribute to higher errors in reconstruction. For example, for all of the above listed algorithms, hexagonal regular sampling of the ionosphere provides the lowest reconstruction error and the performance significantly degrades as the samples in the region become sparse and clustered. The optimized models and coefficients are applied to regional GPS-TEC mapping using the IONOLAB-TEC data (www.ionolab.org). Both Kriging combined with a Kalman Filter and dynamic modeling of NN are also implemented as first trials of TEC and space weather predictions.
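
    As a sketch of the simplest listed technique, Inverse Distance Weighting maps sparse TEC samples onto arbitrary grid points; the power parameter is one of the coefficients such a study would tune on synthetic surfaces. The station layout and synthetic TEC values below are illustrative.

    ```python
    import numpy as np

    def idw(xy_obs, tec_obs, xy_grid, power=2.0, eps=1e-12):
        """Inverse Distance Weighting of sparse GPS-TEC samples onto a grid."""
        d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=-1)
        w = 1.0 / (d ** power + eps)              # eps guards against zero distance
        return (w * tec_obs).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    stations = rng.uniform(0, 10, (30, 2))        # toy lon/lat of GPS receivers
    tec = 20 + 5 * np.sin(stations[:, 0])         # synthetic TEC at the stations
    grid = np.array([[2.0, 3.0], [7.5, 8.0]])     # points to be mapped
    print(idw(stations, tec, grid))
    ```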

  13. Sensor-Motor Maps for Describing Linear Reflex Composition in Hopping.

    PubMed

    Schumacher, Christian; Seyfarth, André

    2017-01-01

    In human and animal motor control several sensory organs contribute to a network of sensory pathways modulating the motion depending on the task and the phase of execution to generate daily motor tasks such as locomotion. To better understand the individual and joint contribution of reflex pathways in locomotor tasks, we developed a neuromuscular model that describes hopping movements. In this model, we consider the influence of proprioceptive length (LFB), velocity (VFB) and force feedback (FFB) pathways of a leg extensor muscle on hopping stability, performance and efficiency (metabolic effort). Therefore, we explore the space describing the blending of the monosynaptic reflex pathway gains. We call this reflex parameter space a sensor-motor map. The sensor-motor maps are used to visualize the functional contribution of sensory pathways in multisensory integration. We further evaluate the robustness of these sensor-motor maps to changes in tendon elasticity, body mass, segment length and ground compliance. The model predicted that different reflex pathway compositions selectively optimize specific hopping characteristics (e.g., performance and efficiency). Both FFB and LFB were pathways that enable hopping. FFB resulted in the largest hopping heights, LFB enhanced hopping efficiency and VFB had the ability to disable hopping. For the tested case, the topology of the sensor-motor maps as well as the location of functionally optimal compositions were invariant to changes in system designs (tendon elasticity, body mass, segment length) or environmental parameters (ground compliance). Our results indicate that different feedback pathway compositions may serve different functional roles. The topology of the sensor-motor map was predicted to be robust against changes in the mechanical system design indicating that the reflex system can use different morphological designs, which does not apply for most robotic systems (for which the control often follows a specific design). Consequently, variations in body mechanics are permitted with consistent compositions of sensory feedback pathways. Given the variability in human body morphology, such variations are highly relevant for human motor control.

  14. Euclid Mission: Mapping the Geometry of the Dark Universe. Mission and Consortium Status

    NASA Technical Reports Server (NTRS)

    Rhodes, Jason

    2011-01-01

    Euclid concept: (1) High-precision survey mission to map the geometry of the Dark Universe (2) Optimized for two complementary cosmological probes: (2a) Weak Gravitational Lensing (2b) Baryonic Acoustic Oscillations (2c) Additional probes: clusters, redshift space distortions, ISW (3) Full extragalactic sky survey with 1.2m telescope at L2: (3a) Imaging: (3a-1) High precision imaging at visible wavelengths (3a-2) Photometry/Imaging in the near-infrared (3b) Near Infrared Spectroscopy (4) Synergy with ground based surveys (5) Legacy science for a wide range of fields in astronomy

  15. Brain templates and atlases.

    PubMed

    Evans, Alan C; Janke, Andrew L; Collins, D Louis; Baillet, Sylvain

    2012-08-15

    The core concept within the field of brain mapping is the use of a standardized, or "stereotaxic", 3D coordinate frame for data analysis and reporting of findings from neuroimaging experiments. This simple construct allows brain researchers to combine data from many subjects such that group-averaged signals, be they structural or functional, can be detected above the background noise that would swamp subtle signals from any single subject. Where the signal is robust enough to be detected in individuals, it allows for the exploration of inter-individual variance in the location of that signal. From a larger perspective, it provides a powerful medium for comparison and/or combination of brain mapping findings from different imaging modalities and laboratories around the world. Finally, it provides a framework for the creation of large-scale neuroimaging databases or "atlases" that capture the population mean and variance in anatomical or physiological metrics as a function of age or disease. However, while the above benefits are not in question at first order, there are a number of conceptual and practical challenges that introduce second-order incompatibilities among experimental data. Stereotaxic mapping requires two basic components: (i) the specification of the 3D stereotaxic coordinate space, and (ii) a mapping function that transforms a 3D brain image from "native" space, i.e. the coordinate frame of the scanner at data acquisition, to that stereotaxic space. The first component is usually expressed by the choice of a representative 3D MR image that serves as target "template" or atlas. The native image is re-sampled from native to stereotaxic space under the mapping function that may have few or many degrees of freedom, depending upon the experimental design. The optimal choice of atlas template and mapping function depends upon considerations of age, gender, hemispheric asymmetry, anatomical correspondence, spatial normalization methodology and disease-specificity. Accounting, or not, for these various factors in defining stereotaxic space has created the specter of an ever-expanding set of atlases, customized for a particular experiment, that are mutually incompatible. These difficulties continue to plague the brain mapping field. This review article summarizes the evolution of stereotaxic space in terms of the basic principles and associated conceptual challenges, the creation of population atlases and the future trends that can be expected in atlas evolution. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Improving 3D Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3D City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of and especially searching these 3D city models, will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. The advantages of implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its clustering in 2D.
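
    Encoding the 3D Hilbert curve is somewhat involved; as a simpler illustration of the same idea (linearizing 3D coordinates so that sorting clusters spatial neighbours), the sketch below computes Morton (Z-order) keys, which preserve locality less well than Hilbert keys but share the mechanism.

    ```python
    def part1by2(n: int) -> int:
        """Spread the bits of a 21-bit integer so they occupy every 3rd position."""
        n &= (1 << 21) - 1
        n = (n | n << 32) & 0x1F00000000FFFF
        n = (n | n << 16) & 0x1F0000FF0000FF
        n = (n | n << 8)  & 0x100F00F00F00F00F
        n = (n | n << 4)  & 0x10C30C30C30C30C3
        n = (n | n << 2)  & 0x1249249249249249
        return n

    def morton3d(x: int, y: int, z: int) -> int:
        """Interleave x, y, z bits into one Z-order key for 3D spatial indexing."""
        return part1by2(x) | part1by2(y) << 1 | part1by2(z) << 2

    # Sorting building blocks by their key groups spatial neighbours on disk.
    print(morton3d(3, 5, 7))
    ```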

  17. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a larger computational burden; 2) the lack of explicit mapping functions, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) the inability to obtain the optimal discriminant vectors that would best optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inversion, which extracts the optimal discriminant vectors for DLPP without a larger computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  18. Optimality in mono- and multisensory map formation.

    PubMed

    Bürck, Moritz; Friedel, Paul; Sichert, Andreas B; Vossen, Christine; van Hemmen, J Leo

    2010-07-01

    In the struggle for survival in a complex and dynamic environment, nature has developed a multitude of sophisticated sensory systems. In order to exploit the information provided by these sensory systems, higher vertebrates reconstruct the spatio-temporal environment from each of the sensory systems they have at their disposal. That is, for each modality the animal computes a neuronal representation of the outside world, a monosensory neuronal map. Here we present a universal framework that allows one to calculate the specific layout of the involved neuronal network by means of a general mathematical principle, viz., stochastic optimality. In order to illustrate the use of this theoretical framework, we provide a step-by-step tutorial of how to apply our model. In so doing, we present a spatial and a temporal example of optimal stimulus reconstruction which underline the advantages of our approach. That is, given a known physical signal transmission and rudimentary knowledge of the detection process, our approach allows one to estimate the possible performance and to predict neuronal properties of biological sensory systems. Finally, information from different sensory modalities has to be integrated so as to gain a unified perception of reality for further processing, e.g., for distinct motor commands. We briefly discuss concepts of multimodal interaction and how a multimodal space can evolve by alignment of monosensory maps.

  19. Surrogate-Based Optimization of Biogeochemical Transport Models

    NASA Astrophysics Data System (ADS)

    Prieß, Malte; Slawig, Thomas

    2010-09-01

    First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations by using a surrogate model replacing the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete stepsize in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
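
    A toy sketch of the space-mapping idea on scalar models (not the NPZD model): the cheap coarse model is repeatedly realigned with the expensive fine model by parameter extraction, and the aggressive-space-mapping step drives the extracted parameter toward the coarse optimum. Both model functions and the value-plus-slope extraction are illustrative.

    ```python
    from scipy.optimize import minimize_scalar

    fine   = lambda x: (x - 1.3) ** 2      # stands in for the expensive simulation
    coarse = lambda x: (x - 1.0) ** 2      # cheap physics-based surrogate

    z_star, x, h = 1.0, 0.0, 1e-4          # coarse optimum, initial design, FD step

    for k in range(4):
        fx = fine(x)                                        # one fine evaluation
        gx = (fine(x + h) - fine(x - h)) / (2 * h)          # fine-model slope
        # Parameter extraction: coarse input matching the fine value and slope.
        p = minimize_scalar(lambda z: (coarse(z) - fx) ** 2
                            + (2 * (z - 1.0) - gx) ** 2).x
        x = x - (p - z_star)               # aggressive space-mapping update
        print(f"iter {k}: x = {x:.4f}, misalignment = {p - z_star:+.4f}")
    ```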

  20. Determination of the optimized single-layer ionospheric height for electron content measurements over China

    NASA Astrophysics Data System (ADS)

    Li, Min; Yuan, Yunbin; Zhang, Baocheng; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao

    2018-02-01

    The ionosphere effective height (IEH) is a very important parameter in total electron content (TEC) measurements under the widely used single-layer model assumption. To overcome the requirement of a large amount of simultaneous vertical and slant ionospheric observations or dense "coinciding" pierce point data, a new approach is proposed for the determination of the optimal IEH: the vertical TEC (VTEC) value converted using a mapping function based on a given IEH is compared with the "ground truth" VTEC value provided by the combined International GNSS Service Global Ionospheric Maps. The optimal IEH in the Chinese region is determined using three different methods based on GNSS data. Based on the ionosonde data from three different locations in China, the altitude variation of the peak electron density (hmF2) is found to have clear diurnal, seasonal and latitudinal dependences, and the diurnal variation of hmF2 ranges from approximately 210 to 520 km in Hainan. The determination of the optimal IEH employing the inverse method suggested by Birch et al. (Radio Sci 37, 2002. doi: 10.1029/2000rs002601) did not yield a consistent altitude in the Chinese region. Tests of the method minimizing the mapping function errors suggested by Nava et al. (Adv Space Res 39:1292-1297, 2007) indicate that the optimal IEH ranges from 400 to 600 km, and the height of 450 km is the most frequent IEH at both high and low solar activity. It is also confirmed that an IEH of 450-550 km is preferred for the Chinese region instead of the commonly adopted 350-450 km when using the determination method of the optimal IEH proposed in this paper.
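
    For reference, the standard thin-shell mapping function under the single-layer model converts slant to vertical TEC for an assumed IEH; a minimal sketch (elevation and TEC values are illustrative):

    ```python
    import numpy as np

    R_E = 6371.0   # mean Earth radius, km

    def slant_to_vertical(stec, elev_deg, ieh_km=450.0):
        """Convert slant TEC to vertical TEC with the thin-shell mapping function.

        Under the single-layer model all electrons are compressed onto a shell
        at the ionosphere effective height (IEH); the study above finds
        450-550 km preferable over China instead of the usual 350-450 km.
        """
        e = np.radians(elev_deg)
        sin_zp = R_E * np.cos(e) / (R_E + ieh_km)      # sine of zenith angle at the shell
        mf = 1.0 / np.sqrt(1.0 - sin_zp ** 2)          # obliquity (mapping) factor
        return stec / mf

    print(slant_to_vertical(stec=80.0, elev_deg=30.0, ieh_km=450.0))
    ```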

  1. Aerostructural Shape and Topology Optimization of Aircraft Wings

    NASA Astrophysics Data System (ADS)

    James, Kai

    A series of novel algorithms for performing aerostructural shape and topology optimization are introduced and applied to the design of aircraft wings. An isoparametric level set method is developed for performing topology optimization of wings and other non-rectangular structures that must be modeled using a non-uniform, body-fitted mesh. The shape sensitivities are mapped to computational space using the transformation defined by the Jacobian of the isoparametric finite elements. The mapped sensitivities are then passed to the Hamilton-Jacobi equation, which is solved on a uniform Cartesian grid. The method is derived for several objective functions including mass, compliance, and global von Mises stress. The results are compared with SIMP results for several two-dimensional benchmark problems. The method is also demonstrated on a three-dimensional wingbox structure subject to fixed loading. It is shown that the isoparametric level set method is competitive with the SIMP method in terms of the final objective value as well as computation time. In a separate problem, the SIMP formulation is used to optimize the structural topology of a wingbox as part of a larger MDO framework. Here, topology optimization is combined with aerodynamic shape optimization, using a monolithic MDO architecture that includes aerostructural coupling. The aerodynamic loads are modeled using a three-dimensional panel method, and the structural analysis makes use of linear, isoparametric, hexahedral elements. The aerodynamic shape is parameterized via a set of twist variables representing the jig twist angle at equally spaced locations along the span of the wing. The sensitivities are determined analytically using a coupled adjoint method. The wing is optimized for minimum drag subject to a compliance constraint taken from a 2 g maneuver condition. The results from the MDO algorithm are compared with those of a sequential optimization procedure in order to quantify the benefits of the MDO approach. While the sequentially optimized wing exhibits a nearly-elliptical lift distribution, the MDO design seeks to push a greater portion of the load toward the root, thus reducing the structural deflection, and allowing for a lighter structure. By exploiting this trade-off, the MDO design achieves a 42% lower drag than the sequential result.

  2. Direct estimation of tracer-kinetic parameter maps from highly undersampled brain dynamic contrast enhanced MRI.

    PubMed

    Guo, Yi; Lingala, Sajan Goud; Zhu, Yinghua; Lebel, R Marc; Nayak, Krishna S

    2017-10-01

    The purpose of this work was to develop and evaluate a T1-weighted dynamic contrast enhanced (DCE) MRI methodology where tracer-kinetic (TK) parameter maps are directly estimated from undersampled (k,t)-space data. The proposed reconstruction involves solving a nonlinear least squares optimization problem that includes explicit use of a full forward model to convert parameter maps to (k,t)-space, utilizing the Patlak TK model. The proposed scheme is compared against an indirect method that creates intermediate images by parallel imaging and compressed sensing prior to TK modeling. Thirteen fully sampled brain tumor DCE-MRI scans with 5-second temporal resolution are retrospectively undersampled at rates R = 20, 40, 60, 80, and 100 for each dynamic frame. TK maps are quantitatively compared based on root mean-squared error (rMSE) and Bland-Altman analysis. The approach is also applied to four prospectively undersampled (R = 30) whole-brain DCE-MRI data sets. In the retrospective study, the proposed method performed statistically better than the indirect method at R ≥ 80 for all 13 cases. This approach provided restoration of TK parameter values with smaller errors in tumor regions of interest, an improvement compared to a state-of-the-art indirect method. Applied prospectively, the proposed method provided whole-brain, high-resolution TK maps with good image quality. Model-based direct estimation of TK maps from (k,t)-space DCE-MRI data is feasible and is compatible with up to 100-fold undersampling. Magn Reson Med 78:1566-1578, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
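
    The Patlak model underlying the forward operator is linear in its two parameters, so for fully sampled data each voxel reduces to a least-squares fit; the sketch below shows only that image-domain fit, not the direct estimation from undersampled (k,t)-space that the paper performs.

    ```python
    import numpy as np

    def fit_patlak(ct, cp, t):
        """Least-squares fit of the Patlak model Ct(t) = Ktrans*int(Cp) + vp*Cp(t).

        ct : tissue concentration curve, cp : arterial input function,
        t  : sample times. Returns (Ktrans, vp) for one voxel.
        """
        # trapezoidal running integral of the arterial input function
        cp_int = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
        A = np.column_stack([cp_int, cp])
        (ktrans, vp), *_ = np.linalg.lstsq(A, ct, rcond=None)
        return ktrans, vp
    ```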

  3. Heuristic approach to image registration

    NASA Astrophysics Data System (ADS)

    Gertner, Izidor; Maslov, Igor V.

    2000-08-01

    Image registration, i.e. the correct mapping of images obtained from different sensor readings onto a common reference frame, is a critical part of multi-sensor ATR/AOR systems based on readings from different types of sensors. In order to fuse two different sensor readings of the same object, the readings have to be put into a common coordinate system. This task can be formulated as an optimization problem in the space of all possible affine transformations of an image. In this paper, a combination of heuristic methods is explored to register gray-scale images. A modification of the Genetic Algorithm is used as the first step in the global search for the optimal transformation. It covers the entire search space with (randomly or heuristically) scattered probe points and helps significantly reduce the search space to a subspace of potentially most successful transformations. Due to its discrete character, however, the Genetic Algorithm in general cannot converge once it comes close to the optimum. Its termination point can be specified either as some predefined number of generations or as the achievement of a certain acceptable convergence level. To refine the search, potential optimal subspaces are searched using the Tabu search and Simulated Annealing methods, which are more delicate and more efficient for local search.

  4. A constraint optimization based virtual network mapping method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen

    2013-03-01

    The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving the virtual network mapping problem. This method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for solving the two phases, respectively. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the available resources supplied by the nodes and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with the minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.
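
    A sketch of a greedy node-mapping phase under the two factors the abstract names (free resources and distance to already-placed nodes); the scoring weight and data layout are illustrative, and the link-mapping phase is not shown.

    ```python
    def map_nodes(virtual_nodes, substrate, dist):
        """Greedily map virtual nodes, preferring resource-rich, nearby hosts."""
        mapping = {}
        for v, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
            hosts = [s for s, free in substrate.items()
                     if free >= demand and s not in mapping.values()]
            if not hosts:
                raise ValueError("no feasible substrate node for " + v)
            # Score: free capacity minus a (toy) penalty on distance to placed nodes.
            score = lambda s: substrate[s] - 0.5 * sum(dist[s][m] for m in mapping.values())
            best = max(hosts, key=score)
            substrate[best] -= demand               # reserve the resources
            mapping[v] = best
        return mapping

    virtual = {"a": 3, "b": 2}                      # virtual node -> CPU demand
    substrate = {"X": 5, "Y": 4, "Z": 2}            # substrate node -> free CPU
    dist = {u: {w: int(u != w) for w in substrate} for u in substrate}  # toy distances
    print(map_nodes(virtual, substrate, dist))
    ```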

  5. Reentry trajectories of a space glider, taking acceleration and heating constraints into account

    NASA Astrophysics Data System (ADS)

    Strauss, Adi

    1988-03-01

    Three-dimensional trajectories for aerodynamically controlled reentry of an unpowered Space Shuttle-type vehicle from equatorial orbit are investigated analytically, summarizing the results obtained in the author's thesis (Strauss, 1987). Computer programs constructed on the basis of the governing equations of Chern and Yang (1982) and Chern and Vinh (1980) in modified dimensionless Chapman variables are used to optimize the roll angle and lift coefficient of the trajectories. Typical results are presented in graphs and maps and shown to be in good agreement with AVION SPATIAL predictions for the ESA Hermes spacecraft.

  6. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
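
    A minimal active-learning loop in the spirit described: fit a Gaussian Process to measured runtimes, then run the next experiment where the predictive uncertainty is largest. The stand-in runtime function and kernel settings are illustrative.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    runtime = lambda p: np.sin(3 * p) + 0.1 * p ** 2     # stand-in for a model run

    X = rng.uniform(0, 4, (5, 1))                        # initial experiments
    y = runtime(X).ravel()
    candidates = np.linspace(0, 4, 200)[:, None]         # parameter grid to map out

    for _ in range(10):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
        _, std = gp.predict(candidates, return_std=True)
        x_next = candidates[[np.argmax(std)]]            # most uncertain parameter
        X = np.vstack([X, x_next])                       # run and record the experiment
        y = np.append(y, runtime(x_next).ravel())
    ```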

  7. Ant Colony Optimization for Mapping, Scheduling and Placing in Reconfigurable Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrandi, Fabrizio; Lanzi, Pier Luca; Pilato, Christian

    Modern heterogeneous embedded platforms, composed of several digital signal, application specific and general purpose processors, also include reconfigurable devices supporting partial dynamic reconfiguration. These devices can change the behavior of some of their parts during execution, allowing hardware acceleration of more sections of the applications. Nevertheless, partial dynamic reconfiguration imposes severe overheads in terms of latency. For such systems, a critical part of the design phase is deciding on which processing elements (mapping) and when (scheduling) to execute a task, but also how to place tasks on the reconfigurable device to guarantee the most efficient reuse of the programmable logic. In this paper we propose an algorithm based on Ant Colony Optimization (ACO) that simultaneously executes the scheduling, the mapping and the linear placing of tasks, hiding reconfiguration overheads through prefetching. Our heuristic gradually constructs solutions and then searches around the best ones, cutting out non-promising areas of the design space. We show how to consider the partial dynamic reconfiguration constraints in the scheduling, placing and mapping problems and compare our formulation to other heuristics that address the same problems. We demonstrate that our proposal is more general and robust, and finds better solutions (16.5% on average) than competing solutions.

  8. Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Volpi, Michele; Copa, Loris

    2010-05-01

    The problem of environmental monitoring networks optimization (MNO) is one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as a design or redesign of a monitoring network by applying some optimization criteria. The most developed and widespread methods are based on geostatistics (the family of kriging models and conditional stochastic simulations). In geostatistics the variance is mainly used as an optimization criterion, which has some advantages and drawbacks. In the present research we study an application of advanced techniques following from statistical learning theory (SLT) - support vector machines (SVM) - and consider the optimization of monitoring networks when dealing with a classification problem (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.). SVM is a universal nonlinear modeling tool for classification problems in high dimensional spaces. The SVM solution maximizes the decision boundary between classes and has a good generalization property for noisy data. The sparse solution of SVM is based on support vectors - data which contribute to the solution with nonzero weights. Fundamentally, MNO for classification problems can be considered as a task of selecting new measurement points which increase the quality of spatial classification and reduce the testing error (the error on new independent measurements). In SLT this is a typical problem of active learning - a selection of new unlabelled points which efficiently reduce the testing error. A classical approach (margin sampling) to active learning is to sample the points closest to the classification boundary. This solution is suboptimal when points (or generally the dataset) are redundant for the same class. In the present research we propose and study two new advanced methods of active learning adapted to the solution of the MNO problem: 1) hierarchical top-down clustering in the input space in order to remove redundancy when data are clustered, and 2) a general method (independent of the classifier) which gives posterior probabilities that can be used to define the classifier confidence and corresponding proposals for new measurement points. The basic ideas and procedures are explained using simulated data sets. The real case study deals with the analysis and mapping of soil types, which is a multi-class classification problem. Maps of soil types are important for the analysis and 3D modeling of heavy metals migration in soil and risk prediction mapping. The results obtained demonstrate the high quality of SVM mapping and the efficiency of monitoring network optimization using active learning approaches. The research was partly supported by SNSF projects No. 200021-126505 and 200020-121835.
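
    A sketch of the classical margin-sampling baseline the abstract mentions (the two proposed refinements, clustering and posterior-based selection, are not shown): candidate measurement sites closest to the SVM decision boundary are proposed next. The toy data stand in for a real survey.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def margin_sampling(clf, X_pool, n_new):
        """Indices of the unlabeled points closest to the SVM decision boundary."""
        d = np.abs(clf.decision_function(X_pool))   # distance-like margin score
        return np.argsort(d)[:n_new]

    # Usage sketch: fit on the current survey, then propose new measurement sites.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(60, 2)), rng.integers(0, 2, 60)   # labeled survey (toy)
    pool = rng.normal(size=(500, 2))                          # candidate locations
    clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
    print(margin_sampling(clf, pool, n_new=5))
    ```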

  9. A comparison of design variables for control theory based airfoil optimization

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work in the area it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using either the potential flow or the Euler equations with either a conformal mapping or a general coordinate system. We have also explored three-dimensional extensions of these formulations recently. The goal of our present work is to demonstrate the versatility of the control theory approach by designing airfoils using both Hicks-Henne functions and B-spline control points as design variables. The research also demonstrates that the parameterization of the design space is an open question in aerodynamic design.
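
    For reference, a Hicks-Henne bump peaks at a chosen chordwise station p via the exponent m = ln(0.5)/ln(p), and the design variables are the bump amplitudes added to a baseline profile. A minimal sketch (baseline shape and bump placement are illustrative):

    ```python
    import numpy as np

    def hicks_henne(x, peaks, amps, width=3.0):
        """Sum of Hicks-Henne bump functions on x in [0, 1].

        Each bump sin(pi * x**m)**width peaks at station p, with
        m = log(0.5)/log(p); amps are the design variables.
        """
        dy = np.zeros_like(x)
        for p, a in zip(peaks, amps):
            m = np.log(0.5) / np.log(p)
            dy += a * np.sin(np.pi * x ** m) ** width
        return dy

    x = np.linspace(1e-6, 1.0, 101)                   # chordwise stations
    y_upper = 0.06 * np.sqrt(x) * (1 - x)             # toy baseline upper surface
    y_new = y_upper + hicks_henne(x, peaks=[0.25, 0.5, 0.75],
                                  amps=[0.002, -0.001, 0.0015])
    ```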

  10. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    PubMed

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check CARMEN system showed a high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps performed from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.

  11. DD-HDS: A method for visualization and exploration of high-dimensional data.

    PubMed

    Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard

    2007-09-01

    Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.

  12. Coordinated Optimization of Visual Cortical Maps (I) Symmetry-based Analysis

    PubMed Central

    Reichl, Lars; Heide, Dominik; Löwel, Siegrid; Crowley, Justin C.; Kaschube, Matthias; Wolf, Fred

    2012-01-01

    In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explain cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps such that inferences about the optimization principle from map layout appear viable. We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps. PMID:23144599

  13. Real-World Application of Robust Design Optimization Assisted by Response Surface Approximation and Visual Data-Mining

    NASA Astrophysics Data System (ADS)

    Shimoyama, Koji; Jeong, Shinkyu; Obayashi, Shigeru

    A new approach for multi-objective robust design optimization was proposed and applied to a real-world design problem with a large number of objective functions. The present approach is assisted by response surface approximation and visual data-mining, and resulted in two major gains regarding computational time and data interpretation. The Kriging model for response surface approximation can markedly reduce the computational time for predictions of robustness. In addition, the use of self-organizing maps as a data-mining technique allows visualization of complicated design information between optimality and robustness in a comprehensible two-dimensional form. Therefore, the extraction and interpretation of trade-off relations between optimality and robustness of design, and also the location of sweet spots in the design space, can be performed in a comprehensive manner.

  14. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    PubMed

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2018-06-01

    This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper, we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of sigma-delta modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  15. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint by the outlier pixels which were located neighboring to the target class in the spectral feature space. The overall accuracies for wheat and bare land achieved were as high as 89.25% and 83.65%, respectively. However, target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. The different window sizes were then tested to acquire more wheat pixels for validation set. The results showed that classification accuracy increased with the increasing window size and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.

  16. Simulating multiprimary LCDs on standard tri-stimulus LC displays

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz; Vonneilich, Katrin; Bonse, Thomas

    2008-01-01

    Large-scale, direct view TV screens, in particular those based on liquid crystal technology, are beginning to use subpixel structures with more than three subpixels to implement a multi-primary display with up to six primaries. Since their input color space is likely to remain tri-stimulus RGB we first focus on some fundamental constraints. Among them, we elaborate simplified gamut mapping architectures as well as color filter geometry, transparency, and chromaticity coordinates in color space. Based on a 'display centric' RGB color space tetrahedrization combined with linear interpolation we describe a simulation framework which enables optimization for up to 7 primaries. We evaluated the performance through mapping the multi-primary design back onto a RGB LC display gamut without building a prototype multi-primary display. As long as we kept the RGB equivalent output signal within the display gamut we could analyze all desirable multi-primary configurations with regard to colorimetric variance and visually perceived quality. Not only does our simulation tool enable us to verify a novel concept it also demonstrates how carefully one needs to design a multiprimary display for LCD TV applications.

  17. Mapping the pharmaceutical design space by amorphous ionic liquid strategies.

    PubMed

    Wiest, Johannes; Saedtler, Marco; Balk, Anja; Merget, Benjamin; Widmer, Toni; Bruhn, Heike; Raccuglia, Marc; Walid, Elbast; Picard, Franck; Stopper, Helga; Dekant, Wolfgang; Lühmann, Tessa; Sotriffer, Christoph; Galli, Bruno; Holzgrabe, Ulrike; Meinel, Lorenz

    2017-12-28

    Poor water solubility of drugs fuels complex formulations and jeopardizes patient access to medication. Simplifying these complexities, we systematically synthesized a library of 36 sterically demanding counterions and mapped the pharmaceutical design space for amorphous ionic liquid strategies for Selurampanel, a poorly water-soluble drug used against migraine. Patients would benefit from a rapid uptake after oral administration to alleviate migraine symptoms. Therefore, we probed the ionic liquids for flux, supersaturation period and hygroscopicity, leading to algorithms linking molecular counterion descriptors to predicted pharmaceutical outcome. By that, 30- or 800-fold improvements of the supersaturation period and fluxes were achieved, as were immediate- to sustained-release profiles, through structural optimization of the counterions compared to the crystalline free acid of Selurampanel. Guided by ionic liquid structure, in vivo profiles ranged from rapid bioavailability and high maximal plasma concentrations to sustained patterns. In conclusion, the study outlined and predicted the accessible pharmaceutical design space of amorphous ionic liquid based and excipient-free formulations, pointing to the enormous pharmaceutical potential of ionic liquid designs. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. A roadmap for optimal control: the right way to commute.

    PubMed

    Ross, I Michael

    2005-12-01

    Optimal control theory is the foundation for many problems in astrodynamics. Typical examples are trajectory design and optimization, relative motion control of distributed space systems and attitude steering. Many such problems in astrodynamics are solved by an alternative route of mathematical analysis and deep physical insight, in part because of the perception that an optimal control framework generates hard problems. Although this is indeed true of the Bellman and Pontryagin frameworks, the covector mapping principle provides a neoclassical approach that renders hard problems easy. That is, although the origins of this philosophy can be traced back to Bernoulli and Euler, it is essentially modern as a result of the strong linkage between approximation theory, set-valued analysis and computing technology. Motivated by the broad success of this approach, mission planners are now conceiving and demanding higher performance from space systems. This has resulted in a new set of theoretical and computational problems. Recently, under the leadership of NASA-GRC, several workshops were held to address some of these problems. This paper outlines the theoretical issues stemming from practical problems in astrodynamics. Emphasis is placed on how it pertains to advanced mission design problems.

  19. Chemical Space Mapping and Structure-Activity Analysis of the ChEMBL Antiviral Compound Set.

    PubMed

    Klimenko, Kyrylo; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre

    2016-08-22

    Curation, standardization and data fusion of the antiviral information present in the ChEMBL public database led to the definition of a robust data set, providing an association of antiviral compounds to seven broadly defined antiviral activity classes. Generative topographic mapping (GTM) subjected to evolutionary tuning was then used to produce maps of the antiviral chemical space, providing an optimal separation of compound families associated with the different antiviral classes. The ability to pinpoint the specific spots occupied (responsibility patterns) on a map by various classes of antiviral compounds opened the way for a GTM-supported search for privileged structural motifs, typical for each antiviral class. The privileged locations of antiviral classes were analyzed in order to highlight underlying privileged common structural motifs. Unlike in classical medicinal chemistry, where privileged structures are, almost always, predefined scaffolds, privileged structural motif detection based on GTM responsibility patterns has the decisive advantage of being able to automatically capture the nature ("resolution detail"-scaffold, detailed substructure, pharmacophore pattern, etc.) of the relevant structural motifs. Responsibility patterns were found to represent underlying structural motifs of various natures-from very fuzzy (groups of various "interchangeable" similar scaffolds), to the classical scenario in medicinal chemistry (underlying motif actually being the scaffold), to very precisely defined motifs (specifically substituted scaffolds).

  20. Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image

    PubMed Central

    Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.

    2014-01-01

    Summary It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681
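
    The Bayesian decoding step can be illustrated with a toy one-dimensional mosaic; the sketch below (Gaussian tuning curves, Gaussian sensor noise, flat prior) is our simplification for illustration, not the paper's retinothalamic model.

```python
import numpy as np

# Toy illustration of decoding stimulus position from a noisy sensor mosaic.
# The actual retinothalamic model, wiring rules and noise statistics in the
# paper are far more detailed; this only shows the decoding principle.

rng = np.random.default_rng(1)
centers = np.linspace(0.0, 1.0, 20)            # sensor (receptive-field) centers
sigma_rf, sigma_noise = 0.08, 0.2
true_pos = 0.37

gain = np.exp(-0.5 * ((true_pos - centers) / sigma_rf) ** 2)
responses = gain + sigma_noise * rng.normal(size=centers.size)

# Posterior over a grid of candidate positions (flat prior, Gaussian noise):
grid = np.linspace(0.0, 1.0, 501)
pred = np.exp(-0.5 * ((grid[:, None] - centers[None, :]) / sigma_rf) ** 2)
loglik = -0.5 * ((responses[None, :] - pred) ** 2).sum(axis=1) / sigma_noise**2
post = np.exp(loglik - loglik.max())
post /= post.sum()

print("true:", true_pos, "MAP estimate:", grid[np.argmax(post)])
```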

  1. Coarse analysis of collective behaviors: Bifurcation analysis of the optimal velocity model for traffic jam formation

    NASA Astrophysics Data System (ADS)

    Miura, Yasunari; Sugiyama, Yuki

    2017-12-01

    We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed from these coarse variables. We apply this method to the analysis of a traffic model, the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
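
    The diffusion-map construction itself is compact enough to sketch. The following is the standard recipe (Gaussian kernel, Markov normalization, leading nontrivial eigenvectors as coarse variables) rather than the authors' exact pipeline, with random data standing in for traffic-model snapshots.

```python
import numpy as np
from scipy.linalg import eigh

def diffusion_map(X, eps, n_coords=2):
    """Classic diffusion-map construction (a generic sketch): Gaussian kernel
    -> Markov normalization -> eigenvectors of the transition operator serve
    as coarse-grained variables."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / eps)                     # Gaussian affinity
    d = K.sum(axis=1)
    # Symmetric conjugate of the row-stochastic matrix P = D^-1 K, so a
    # symmetric eigensolver can be used: S = D^-1/2 K D^-1/2.
    S = K / np.sqrt(np.outer(d, d))
    vals, vecs = eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]          # back to eigenvectors of P
    # Skip the trivial constant eigenvector; scale by eigenvalues.
    return psi[:, 1:n_coords + 1] * vals[1:n_coords + 1]

# Snapshots of a many-body state (e.g. headway/velocity configurations)
# would be stacked as rows of X; here random data stands in.
X = np.random.default_rng(2).normal(size=(200, 10))
coords = diffusion_map(X, eps=10.0)
print(coords.shape)   # (200, 2) -> a trajectory in the coarse-variable plane
```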

  2. Markovian Search Games in Heterogeneous Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    2009-01-01

    We consider how to search for a mobile evader in a large heterogeneous region when sensors are used for detection. Sensors are modeled using probability of detection. Due to environmental effects, this probability will not be constant over the entire region. We map this problem to a graph search problem and, even though deterministic graph search is NP-complete, we derive a tractable, optimal, probabilistic search strategy. We do this by defining the problem as a differential game played on a Markov chain. We prove that this strategy is optimal in the sense of Nash. Simulations of an example problem illustrate our approach and verify our claims.

  3. Conformational and functional analysis of molecular dynamics trajectories by Self-Organising Maps

    PubMed Central

    2011-01-01

    Background Molecular dynamics (MD) simulations are powerful tools to investigate the conformational dynamics of proteins, which is often a critical element of their function. Identification of functionally relevant conformations is generally done by clustering the large ensemble of structures that are generated. Recently, Self-Organising Maps (SOMs) were reported to perform more accurately and to provide more consistent results than traditional clustering algorithms in various data mining problems. We present a novel strategy to analyse and compare conformational ensembles of protein domains using a two-level approach that combines SOMs and hierarchical clustering. Results The conformational dynamics of the α-spectrin SH3 protein domain and six single mutants were analysed by MD simulations. The Cartesian coordinates of the Cα atoms of conformations sampled in the essential space were used as input data vectors for SOM training; complete-linkage clustering was then performed on the SOM prototype vectors. A specific protocol to optimize a SOM for structural ensembles was proposed: the optimal SOM was selected by means of a Taguchi experimental design plan applied to different data sets, and the optimal sampling rate of the MD trajectory was selected. The proposed two-level approach was applied to single trajectories of the SH3 domain independently as well as to groups of them at the same time. The results demonstrated the potential of this approach in the analysis of large ensembles of molecular structures: the possibility of producing a topological mapping of the conformational space in a simple 2D visualisation, as well as of effectively highlighting differences in the conformational dynamics directly related to biological functions. Conclusions The use of a two-level approach combining SOMs and hierarchical clustering for conformational analysis of structural ensembles of proteins was proposed. It can easily be extended to other study cases and to conformational ensembles from other sources. PMID:21569575
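
    The two-level idea (SOM first, hierarchical clustering of the prototypes second) can be sketched directly; grid size, rates and iteration counts below are illustrative, not the Taguchi-optimized protocol of the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Level 1: train a small SOM on conformation vectors.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 30))          # e.g. flattened C-alpha coordinates

rows, cols = 6, 6
W = rng.normal(size=(rows * cols, X.shape[1]))          # prototype vectors
gy, gx = np.divmod(np.arange(rows * cols), cols)
grid = np.stack([gy, gx], axis=1).astype(float)

n_iter, lr0, sigma0 = 5000, 0.5, 2.0
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # best matching unit
    frac = t / n_iter
    lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
    # Gaussian neighborhood on the map grid pulls nearby prototypes along.
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)

# Level 2: complete-linkage clustering of the SOM prototypes.
Z = linkage(W, method='complete')
labels = fcluster(Z, t=4, criterion='maxclust')
print("prototype cluster labels:", labels)
```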

  4. Beam orientation optimization for intensity-modulated radiation therapy using mixed integer programming

    NASA Astrophysics Data System (ADS)

    Yang, Ruijie; Dai, Jianrong; Yang, Yong; Hu, Yimin

    2006-08-01

    The purpose of this study is to extend an algorithm proposed for beam orientation optimization in classical conformal radiotherapy to intensity-modulated radiation therapy (IMRT) and to evaluate the algorithm's performance in IMRT scenarios. In addition, the effect of the candidate pool of beam orientations, in terms of beam orientation resolution and starting orientation, on the optimized beam configuration, plan quality and optimization time is also explored. The algorithm is based on the technique of mixed integer linear programming, in which binary variables represent candidate beam orientations and non-negative continuous variables represent beamlet weights in beam intensity maps. Both beam orientations and beam intensity maps are simultaneously optimized in the algorithm with a deterministic method. Several different clinical cases were used to test the algorithm, and the results show that both target coverage and critical structure sparing were significantly improved for the plans with optimized beam orientations compared to those with equi-spaced beam orientations. The calculation time was less than an hour for the cases with 36 binary variables on a PC with a Pentium IV 2.66 GHz processor. It was also found that decreasing the beam orientation resolution to 10° greatly reduced the size of the candidate pool of beam orientations without significant influence on the optimized beam configuration and plan quality, while the choice of starting orientation had a large influence. Our study demonstrates that the algorithm can be applied to IMRT scenarios, and better beam orientation configurations can be obtained using this algorithm. Furthermore, the optimization efficiency can be greatly increased through proper selection of beam orientation resolution and starting beam orientation while maintaining the quality of the optimized beam configurations and plans.
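
    The coupling of binary orientation variables and continuous beamlet weights is the crux of such formulations. The following toy model, written with the PuLP library, shows one plausible shape of it; the dose matrix, objective and limits are made-up placeholders, not the paper's clinical formulation.

```python
import numpy as np
import pulp

# Toy mixed-integer model: binary variables pick beam orientations,
# continuous variables carry beamlet weights, tied by big-M constraints.

rng = np.random.default_rng(4)
n_beams, n_blets, n_vox = 8, 5, 20
D = rng.random((n_vox, n_beams, n_blets))   # dose per unit beamlet weight
target = np.ones(n_vox)                     # prescribed voxel doses
max_beams, big_m = 3, 10.0

prob = pulp.LpProblem("beam_orientation", pulp.LpMinimize)
y = [pulp.LpVariable(f"y{b}", cat="Binary") for b in range(n_beams)]
x = [[pulp.LpVariable(f"x{b}_{j}", lowBound=0) for j in range(n_blets)]
     for b in range(n_beams)]
# Linearized |dose - target| via auxiliary deviation variables.
u = [pulp.LpVariable(f"u{v}", lowBound=0) for v in range(n_vox)]

for v in range(n_vox):
    dose_v = pulp.lpSum(D[v, b, j] * x[b][j]
                        for b in range(n_beams) for j in range(n_blets))
    prob += dose_v - target[v] <= u[v]
    prob += target[v] - dose_v <= u[v]
for b in range(n_beams):
    for j in range(n_blets):
        prob += x[b][j] <= big_m * y[b]     # beamlet active only if beam chosen
prob += pulp.lpSum(y) <= max_beams
prob += pulp.lpSum(u)                       # objective: total dose deviation

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("chosen beams:", [b for b in range(n_beams) if y[b].value() > 0.5])
```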

  5. Autonomous interplanetary constellation design

    NASA Astrophysics Data System (ADS)

    Chow, Cornelius Channing, II

    According to NASA's integrated space technology roadmaps, space-based infrastructures are envisioned as necessary ingredients of a sustained effort in continuing space exploration. Whether it be for extra-terrestrial habitats, roving/cargo vehicles, or space tourism, autonomous space networks will provide a vital communications lifeline for both future robotic and human missions alike. Projecting that the Moon will be a bustling hub of activity within a few decades, a near-term opportunity for in-situ infrastructure development is within reach. This dissertation addresses the anticipated need for in-space infrastructure by investigating a general design methodology for autonomous interplanetary constellations; to illustrate the theory, this manuscript presents results from an application to the Earth-Moon neighborhood. The constellation design methodology is formulated as an optimization problem, involving a trajectory design step followed by a spacecraft placement sequence. Modeling the dynamics as a restricted 3-body problem, the investigated design space consists of families of periodic orbits which play host to the constellations, punctuated by arrangements of spacecraft autonomously guided by a navigation strategy called LiAISON (Linked Autonomous Interplanetary Satellite Orbit Navigation). Instead of more traditional exhaustive search methods, a numerical continuation approach is implemented to map the admissible configuration space. In particular, Keller's pseudo-arclength technique is used to follow folding/bifurcating solution manifolds, which are otherwise inaccessible with other parameter continuation schemes. A succinct characterization of the underlying structure of the local, as well as global, extrema is thus achievable with little a priori intuition about the solution space. Furthermore, the proposed design methodology offers benefits in computation speed as well as the ability to handle mildly stochastic systems. An application of the constellation design methodology to the restricted Earth-Moon system reveals optimal pairwise configurations for various L1, L2, and L5 (halo, axial, and vertical) periodic orbit families. Navigation accuracies on the order of 10^(±1) meters in position space are obtained for the optimal Earth-Moon constellations, given measurement noise on the order of 1 meter.
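
    Keller's pseudo-arclength idea can be shown on a one-equation toy problem: parameter continuation in the parameter alone stalls at a fold, while the arclength-augmented predictor-corrector walks through it. This is purely illustrative; the dissertation applies the technique to periodic-orbit families of the restricted 3-body problem.

```python
import numpy as np

# Minimal pseudo-arclength continuation (Keller) on a toy fold: the solution
# set of F(x, lam) = lam - x^2 folds back at (0, 0), where naive continuation
# in lam alone fails. Augmenting with an arclength condition lets the
# predictor-corrector pass through the fold.

def F(x, lam):
    return lam - x**2

def jac(x, lam):                      # dF/dx, dF/dlam
    return -2.0 * x, 1.0

u = np.array([-1.0, 1.0])             # (x, lam) on the branch: lam = x^2
tangent = np.array([1.0, -2.0])       # d/ds (x, lam) along the left branch
tangent /= np.linalg.norm(tangent)
ds = 0.1

branch = [u.copy()]
for _ in range(40):
    pred = u + ds * tangent           # predictor step along the tangent
    x, lam = pred
    for _ in range(20):               # Newton corrector on augmented system
        fx, fl = jac(x, lam)
        r = np.array([F(x, lam),
                      (x - u[0]) * tangent[0] + (lam - u[1]) * tangent[1] - ds])
        J = np.array([[fx, fl], tangent])
        dx, dlam = np.linalg.solve(J, -r)
        x, lam = x + dx, lam + dlam
        if abs(F(x, lam)) < 1e-12:
            break
    new = np.array([x, lam])
    t_new = new - u                   # secant approximation of the tangent
    tangent = t_new / np.linalg.norm(t_new)
    u = new
    branch.append(u.copy())

xs = np.array(branch)
print("passed the fold:", xs[:, 0].min() < 0 < xs[:, 0].max())
```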

  6. Bounded state variables and the calculus of variations

    NASA Technical Reports Server (NTRS)

    Hanafy, L. M.

    1972-01-01

    An optimal control problem with bounded state variables is transformed into a Lagrange problem by means of differentiable mappings which take some Euclidean space onto the control and state regions. Whereas all such mappings lead to a Lagrange problem, it is shown that only those which are defined as acceptable pairs of transformations are suitable in the sense that solutions to the transformed Lagrange problem will lead to solutions to the original bounded state problem and vice versa. In particular, an acceptable pair of transformations is exhibited for the case when the control and state regions are right parallelepipeds. Finally, a description of the necessary conditions for the bounded state problem which were obtained by this method is given.
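
    One classical mapping of this kind, given here as a hedged example (the report's exact acceptable pair may differ in detail), is the sine substitution for a state bounded on an interval:

```latex
% A state bounded by a <= x(t) <= b is replaced by an unconstrained
% variable u(t) via
\[
  x(t) \;=\; \frac{a+b}{2} \;+\; \frac{b-a}{2}\,\sin u(t),
  \qquad u(t) \in \mathbb{R},
\]
% which maps the whole real line onto [a,b]. Differentiating,
\[
  \dot{x}(t) \;=\; \frac{b-a}{2}\,\cos u(t)\,\dot{u}(t),
\]
% so the bound is satisfied automatically and the constrained problem in x
% becomes an unconstrained Lagrange problem in u. Applied componentwise, the
% same substitution handles a right parallelepiped of state constraints.
```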

  7. Perspective: Composition–structure–property mapping in high-throughput experiments: Turning data into knowledge

    DOE PAGES

    Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad

    2016-05-26

    With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have in the generation of phase diagrams and beyond.

  8. WFIRST: Resolving the Milky Way Galaxy

    NASA Astrophysics Data System (ADS)

    Kalirai, Jason; Conroy, Charlie; Dressler, Alan; Geha, Marla; Levesque, Emily; Lu, Jessica; Tumlinson, Jason

    2018-01-01

    WFIRST will have a transformative impact on measuring and characterizing resolved stellar populations in the Milky Way. The proximity of these populations and the level of detail at which they must be studied map directly onto all three pillars of WFIRST capabilities: sensitivity from a 2.4 meter space-based telescope, resolution from 0.1" pixels, and a large 0.3 degree field of view from multiple detectors. In this poster, we describe the activities of the WFIRST Science Investigation Team (SIT) "Resolving the Milky Way with WFIRST". Notional programs guiding our analysis include targeting sightlines to establish the first well-resolved large-scale maps of the Galactic bulge and central region, pockets of star formation in the disk, benchmark star clusters, and halo substructure and ultra-faint dwarf satellites. As an output of this study, our team is building optimized strategies and tools to maximize stellar population science with WFIRST. These will include: new grids of IR-optimized stellar evolution and synthetic spectroscopic models; pipelines and algorithms for optimal data reduction at the WFIRST sensitivity and pixel scale; wide-field simulations of Milky Way environments including new astrometric studies; and strategies and automated algorithms to find substructure and dwarf galaxies in the Milky Way through the WFIRST High Latitude Survey.

  9. On optimal fuzzy best proximity coincidence points of fuzzy order preserving proximal Ψ(σ, α)-lower-bounding asymptotically contractive mappings in non-Archimedean fuzzy metric spaces.

    PubMed

    De la Sen, Manuel; Abbas, Mujahid; Saleem, Naeem

    2016-01-01

    This paper discusses some convergence properties in fuzzy ordered proximal approaches defined by [Formula: see text]-sequences of pairs, where [Formula: see text] is a surjective self-mapping and [Formula: see text], where A and B are nonempty subsets of an abstract nonempty set X and [Formula: see text] is a partially ordered non-Archimedean fuzzy metric space which is endowed with a fuzzy metric M, a triangular norm * and an ordering [Formula: see text]. The fuzzy set M takes values in a sequence or set [Formula: see text], where the elements of the so-called switching rule [Formula: see text] are defined from [Formula: see text] to a subset of [Formula: see text]. Such a switching rule selects a particular realization of M at the nth iteration and is parameterized by a growth evolution sequence [Formula: see text] and a sequence or set [Formula: see text] which belongs to the so-called [Formula: see text]-lower-bounding mappings, which are defined from [0, 1] to [0, 1]. Some application examples concerning discrete systems under switching rules and best approximation solvability of algebraic equations are discussed.

  10. UltraColor: a new gamut-mapping strategy

    NASA Astrophysics Data System (ADS)

    Spaulding, Kevin E.; Ellson, Richard N.; Sullivan, James R.

    1995-04-01

    Many color calibration and enhancement strategies exist for digital systems. Typically, these approaches are optimized to work well with one class of images, but may produce unsatisfactory results for other types of images. For example, a colorimetric strategy may work well when printing photographic scenes, but may give inferior results for business graphics images because of device color gamut limitations. On the other hand, a color enhancement strategy that works well for business graphics images may distort the color reproduction of skin tones and other important photographic colors. This paper describes a method for specifying different color mapping strategies in various regions of color space, while providing a mechanism for smooth transitions between the different regions. The method involves a two-step process: (1) constraints are applied to some subset of the points in the input color space, explicitly specifying the color mapping function; (2) the color mapping for the remainder of the color values is then determined using an interpolation algorithm that preserves continuity and smoothness. The interpolation algorithm that was developed is based on a computer graphics morphing technique. This method was used to develop the UltraColor gamut mapping strategy, which combines a colorimetric mapping for colors with low saturation levels with a color enhancement technique for colors with high saturation levels. The result is a single color transformation that produces superior quality for all classes of imagery. UltraColor has been incorporated in several models of Kodak printers, including the Kodak ColorEase PS and the Kodak XLS 8600 PS thermal dye sublimation printers.

  11. Fixed point theorems for generalized α -β-weakly contraction mappings in metric spaces and applications.

    PubMed

    Latif, Abdul; Mongkolkeha, Chirasak; Sintunavarat, Wutiphol

    2014-01-01

    We extend the notion of generalized weakly contraction mappings due to Choudhury et al. (2011) to generalized α-β-weakly contraction mappings. We show with examples that our new class of mappings is a real generalization of several known classes of mappings. We also establish fixed point results for such mappings in metric spaces. Applying our new results, we obtain fixed point results on ordinary metric spaces, metric spaces endowed with an arbitrary binary relation, and metric spaces endowed with graph.

  12. Comparison of Probabilistic Coastal Inundation Maps Based on Historical Storms and Statistically Modeled Storm Ensemble

    NASA Astrophysics Data System (ADS)

    Feng, X.; Sheng, Y.; Condon, A. J.; Paramygin, V. A.; Hall, T.

    2012-12-01

    A cost effective method, JPM-OS (Joint Probability Method with Optimal Sampling), for determining storm response and inundation return frequencies was developed and applied to quantify the hazard of hurricane storm surges and inundation along the southwest Florida (US) coast (Condon and Sheng 2012). The JPM-OS uses piecewise multivariate regression splines coupled with dimension-adaptive sparse grids to enable the generation of a base flood elevation (BFE) map. Storms are characterized by their landfall characteristics (pressure deficit, radius to maximum winds, forward speed, heading, and landfall location), and a sparse grid algorithm determines the optimal set of storm parameter combinations so that the inundation from any other storm parameter combination can be determined. The end result is a sample of a few hundred (197 for SW FL) optimal storms which are simulated using a dynamically coupled storm surge/wave modeling system, CH3D-SSMS (Sheng et al. 2010). The limited historical climatology (1940-2009) is explored to develop probabilistic characterizations of the five storm parameters. The probability distributions are discretized, and the inundation response of all parameter combinations is determined by interpolation in the five-dimensional space of the optimal storms. The surge response and the associated joint probability of each parameter combination are used to determine the flood elevation with a 1% annual probability of occurrence. The limited historical data constrain the accuracy of the PDFs of the hurricane characteristics, which in turn affects the accuracy of the calculated BFE maps. To offset the deficiency of the limited historical dataset, this study presents a different method for producing coastal inundation maps. Instead of using the historical storm data, here we adopt 33,731 tracks that represent the storm climatology of the North Atlantic basin and the SW Florida coast. This large set of hurricane tracks is generated from a statistical model which had been used for Western North Pacific (WNP) tropical cyclone (TC) genesis (Hall 2011) as well as North Atlantic tropical cyclone genesis (Hall and Jewson 2007). The introduction of these tracks compensates for the shortage of historical samples and allows for more reliable PDFs required for implementation of JPM-OS. Using the 33,731 tracks and JPM-OS, an optimal storm ensemble is determined. This approach results in different storms/winds for storm surge and inundation modeling, and produces different base flood elevation maps for coastal regions. Coastal inundation maps produced by the two different methods will be discussed in detail in the poster paper.

  13. Seabird aggregative patterns: a new tool for offshore wind energy risk assessment.

    PubMed

    Christel, Isadora; Certain, Grégoire; Cama, Albert; Vieites, David R; Ferrer, Xavier

    2013-01-15

    The emerging development of offshore wind energy has raised public concern over its impact on seabird communities. There is a need for an adequate methodology to determine its potential impacts on seabirds. Environmental Impact Assessments (EIAs) mostly rely on a succession of plain density maps without an integrated interpretation of seabird spatio-temporal variability. Using Taylor's power law coupled with mixed-effect models, the spatio-temporal variability of species' distributions can be synthesized into a measure of the aggregation levels of individuals over time and space. Applying the method to a seabird aerial survey in the Ebro Delta, NW Mediterranean Sea, we were able to make an explicit distinction between transitional and feeding areas to define and map the potential impacts of an offshore wind farm project. We use the Ebro Delta study case to discuss the advantages of potential impact maps over density maps, as well as to illustrate how these potential impact maps can be applied to inform on concern levels, optimal EIA design and monitoring in the assessment of local offshore wind energy projects. Copyright © 2012 Elsevier Ltd. All rights reserved.
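
    Taylor's power law, the aggregation measure underlying the method, relates the variance of counts to their mean across sampling units:

```latex
\[
  s^{2} \;=\; a\,m^{b}
  \qquad\Longleftrightarrow\qquad
  \log s^{2} \;=\; \log a \;+\; b \log m .
\]
% The exponent b is estimated by log-log regression; b = 1 corresponds to
% random (Poisson-like) use of space, while b > 1 indicates aggregation,
% here distinguishing persistently used feeding areas from transitional ones.
```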

  15. Automated map sharpening by maximization of detail and connectivity

    DOE PAGES

    Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...

    2018-05-18

    An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
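
    The two ingredients of the metric are straightforward to compute on a gridded map, e.g. with scikit-image as below; how they are combined into the published 'adjusted surface area' is not specified in the abstract, so the final combination line is only a placeholder.

```python
import numpy as np
from skimage import measure

# Sketch of the two ingredients: surface area and connectivity of an
# iso-contour enclosing a fixed volume fraction of a (sharpened) map.

def detail_and_connectivity(rho, volume_fraction=0.2):
    # Iso-level chosen so that `volume_fraction` of voxels lie above it.
    level = np.quantile(rho, 1.0 - volume_fraction)
    verts, faces, _, _ = measure.marching_cubes(rho, level=level)
    area = measure.mesh_surface_area(verts, faces)   # detail: high is good
    labels = measure.label(rho > level)              # connected regions
    n_regions = labels.max()                         # connectivity: low count is good
    return area, n_regions

rng = np.random.default_rng(5)
rho = rng.normal(size=(32, 32, 32))                  # stand-in density map
area, n_regions = detail_and_connectivity(rho)
score = area / n_regions    # placeholder combination, not the paper's formula
print(area, n_regions, score)
```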

  16. Debris mapping sensor technology project summary: Technology flight experiments program area of the space platforms technology program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The topics presented are covered in viewgraph form. Programmatic objectives are: (1) to improve characterization of the orbital debris environment; and (2) to provide a passive sensor test bed for debris collision detection systems. Technical objectives are: (1) to study LEO debris altitude, size and temperature distribution down to 1 mm particles; (2) to quantify ground based radar and optical data ambiguities; and (3) to optimize debris detection strategies.

  17. BRIC-17 Mapping Spaceflight-Induced Hypoxic Signaling and Response in Plants

    NASA Technical Reports Server (NTRS)

    Gilroy, Simon; Choi, Won-Gyu; Swanson, Sarah

    2012-01-01

    Goals of this work are: (1) define global changes in gene expression patterns in Arabidopsis plants grown in microgravity using whole-genome microarrays; (2) compare to mutants resistant to low-oxygen challenge using whole-genome microarrays; root and shoot size are also measured. Outcomes from this research are: (1) fundamental information on plant responses to the stresses inherent in spaceflight; (2) the potential to inform genetic strategies for engineering plants for optimal growth in space.

  18. Method for hue plane preserving color correction.

    PubMed

    Mackiewicz, Michal; Andersen, Casper F; Finlayson, Graham

    2016-11-01

    Hue plane preserving color correction (HPPCC), introduced by Andersen and Hardeberg [Proceedings of the 13th Color and Imaging Conference (CIC) (2005), pp. 141-146], maps device-dependent color values (RGB) to colorimetric color values (XYZ) using a set of linear transforms, realized by white point preserving 3×3 matrices, where each transform is learned and applied in a subregion of color space, defined by two adjacent hue planes. The hue plane delimited subregions of camera RGB values are mapped to corresponding hue plane delimited subregions of estimated colorimetric XYZ values. Hue planes are geometrical half-planes, where each is defined by the neutral axis and a chromatic color in a linear color space. The key advantage of the HPPCC method is that, while offering an estimation accuracy of higher order methods, it maintains the linear colorimetric relations of colors in hue planes. As a significant result, it therefore also renders the colorimetric estimates invariant to exposure and shading of object reflection. In this paper, we present a new flexible and robust version of HPPCC using constrained least squares in the optimization, where the subregions can be chosen freely in number and position in order to optimize the results while constraining transform continuity at the subregion boundaries. The method is compared to a selection of other state-of-the-art characterization methods, and the results show that it outperforms the original HPPCC method.
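
    The core fitting step, as we read the abstract, is an equality-constrained least squares per hue subregion; a minimal sketch follows (KKT solve per matrix row, white point preserved exactly), omitting the subregion selection and the cross-boundary continuity constraints of the paper.

```python
import numpy as np

# Within one hue-plane-delimited subregion: find a 3x3 matrix M minimizing
# ||RGB @ M.T - XYZ|| subject to exact white-point preservation
# M @ rgb_white = xyz_white. Each row of M solves independently.

def row_constrained_lstsq(A, b, w, w_target):
    # minimize ||A m - b||^2  subject to  w . m = w_target  (KKT system)
    n = A.shape[1]
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = 2.0 * A.T @ A
    kkt[:n, n] = w
    kkt[n, :n] = w
    rhs = np.concatenate([2.0 * A.T @ b, [w_target]])
    return np.linalg.solve(kkt, rhs)[:n]

rng = np.random.default_rng(6)
RGB = rng.random((50, 3))                   # training patches in one subregion
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
XYZ = RGB @ M_true.T + 0.01 * rng.normal(size=(50, 3))

rgb_w = np.ones(3)                          # camera white
xyz_w = M_true @ rgb_w                      # colorimetric white (assumed known)

M = np.vstack([row_constrained_lstsq(RGB, XYZ[:, i], rgb_w, xyz_w[i])
               for i in range(3)])
print(np.allclose(M @ rgb_w, xyz_w))        # white point preserved exactly
```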

  19. A methodology for the generation of the 2-D map from unknown navigation environment by traveling a short distance

    NASA Technical Reports Server (NTRS)

    Bourbakis, N.; Sarkar, D.

    1994-01-01

    A technique for generation of a 2-D space map by traveling a short distance is described. The space to be mapped can be classified as: (1) space without obstacles, (2) space with stationary obstacles, and (3) space with moving obstacles. This paper presents the methodology used to generate a 2-D map of an unknown navigation space. The ability to minimize redundancy during travel and to maximize the confidence function for generation of the map are advantages of this technique.

  20. Indexing a sequence for mapping reads with a single mismatch.

    PubMed

    Crochemore, Maxime; Langiu, Alessio; Rahman, M Sohel

    2014-05-28

    Mapping reads against a genome sequence is an interesting and useful problem in computational molecular biology and bioinformatics. In this paper, we focus on the problem of indexing a sequence for mapping reads with a single mismatch. We first focus on a simpler problem where the length of the pattern is given beforehand during the data structure construction. This version of the problem is interesting in its own right in the context of next generation sequencing. In the sequel, we show how to solve the more general problem. In both cases, our algorithm can construct an efficient data structure in O(n log^(1+ε) n) time and space and can answer subsequent queries in O(m log log n + K) time. Here, n is the length of the sequence, m is the length of the read, 0 < ε < 1, and K is the output size.
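
    The pigeonhole principle behind single-mismatch matching is easy to demonstrate: if a read occurs with at most one mismatch, one of its two halves occurs exactly. The naive sketch below uses plain string search for the halves; it conveys the filtration-and-verification idea only, not the paper's O(m log log n + K)-query index.

```python
# Pigeonhole sketch of single-mismatch read mapping: exact hits on either
# half of the read are candidate positions that are then verified.

def map_with_one_mismatch(text, read):
    m = len(read)
    half = m // 2
    hits = set()
    for part, offset in ((read[:half], 0), (read[half:], half)):
        start = text.find(part)
        while start != -1:
            pos = start - offset                    # implied read start
            if 0 <= pos <= len(text) - m:
                mism = sum(a != b for a, b in zip(text[pos:pos + m], read))
                if mism <= 1:
                    hits.add(pos)
            start = text.find(part, start + 1)
    return sorted(hits)

genome = "acgtacgtgactgcatgcacgtacgt"
print(map_with_one_mismatch(genome, "acgtacga"))    # hits at 0 and 18
```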

  1. Real-space refinement in PHENIX for cryo-EM and crystallography

    DOE PAGES

    Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.; ...

    2018-06-01

    This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.

  3. Texture mapping via optimal mass transport.

    PubMed

    Dominitz, Ayelet; Tannenbaum, Allen

    2010-01-01

    In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the "earth-mover's metric"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.
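
    The discrete "earth mover's" step can be illustrated with the POT (Python Optimal Transport) package; this generic sketch moves uniform mass between two point clouds and is not the paper's multiresolution gradient flow on surfaces.

```python
import numpy as np
import ot   # POT: Python Optimal Transport (pip install pot)

# Discrete optimal mass transport: move the mass of one point cloud onto
# another at minimal quadratic transport cost.

rng = np.random.default_rng(7)
X = rng.random((40, 2))                 # e.g. vertices after a conformal map
Y = rng.random((40, 2))                 # target (area-uniform) sample points
a = np.full(40, 1.0 / 40)               # source mass: distorted area elements
b = np.full(40, 1.0 / 40)               # target mass: uniform area

M = ot.dist(X, Y)                        # squared Euclidean cost matrix
G = ot.emd(a, b, M)                      # optimal transport plan (40 x 40)

# Barycentric image of each source point under the plan: the area-correcting
# displacement that the continuous gradient flow computes on the surface.
X_mapped = (G @ Y) / G.sum(axis=1, keepdims=True)
print("transport cost:", float((G * M).sum()))
```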

  4. A Fast linking approach for CMYK to CMYK conversion preserving black separation in ICC color management system

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao

    2003-12-01

    In the linking step of the standard ICC color management workflow for CMYK to CMYK conversion, a CMM takes an AToBn tag (n = 0, 1, or 2) from a source ICC profile to convert a color from the source color space to PCS (profile connection space), and then takes a BToAn tag from the destination ICC profile to convert the color from PCS to the destination color space. This approach may give satisfactory results perceptually or colorimetrically. However, it does not preserve the K channel for CMYK to CMYK conversion, which is often required in the graphic arts market. The problem is that the structure of a BToAn tag is designed to convert colors from PCS to a device color space ignoring the K values from the source color space. Different approaches have been developed to control K in CMYK to CMYK printing, yet none of them fits well into the "Profile - PCS - Profile" model of the ICC color management system. A traditional approach is to transform the source CMYK to the destination CMYK by 1-D TRC curves and GCR/UCR tables. This method is so simple that it cannot accurately transform colors perceptually or colorimetrically. Another method is to build a 4-D CMYK to CMYK closed-loop lookup table (LUT) (or a deviceLink ICC profile) for the color transformation. However, this approach does not fit into open color management workflows, for it ties the source and the destination color spaces together in the color characterization step. A specialized CMM may preserve K for a limited number of colors by mapping those CMYK colors to some carefully chosen PCS colors in both the AToBi tag and the BToAi tag. A more complete solution is to move to smart linking, in which gamut mapping is performed in real-time linking at a CMM. This method seems to solve all problems existing in CMYK to CMYK conversion. However, it introduces new problems: 1) gamut mapping at real-time linking is often unacceptably slow; 2) gamut mapping may not be optimized or may be unreliable; 3) manual adjustment for building high quality maps does not fit into the smart CMM workflow. A new approach is described in this paper to solve these problems. Instead of using a BToAn tag from the destination profile for color transformation, a new tag is created to map colors in PCS (L*a*b* or XYZ) with different K values to different CMY values. A set of 3-D LUTs for different K values is created for the conversion from PCS to CMY, and 1-D LUTs are created for the conversion from luminance to K and to guide a CMM to perform the interpolation from KPCS (K plus PCS) to CMYK. The gamut mapping is performed in the step that creates the profile, thus avoiding real-time gamut mapping in a CMM. With this approach, the black channel is preserved; the "Profile - PCS - Profile" approach is still valid; and the gamut mapping is not performed during linking in a CMM. Therefore, gamut mapping can be manually adjusted for high quality color mapping, the linking is almost as easy and fast as standard linking, and the black channel is preserved.
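
    The KPCS interpolation can be sketched as a stack of per-K 3-D LUTs with 1-D blending across K; the LUT contents below are synthetic placeholders, with real tables coming from printer characterization.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# One 3-D PCS->CMY LUT per K level, plus 1-D interpolation across K.

L = np.linspace(0, 100, 9)
a = np.linspace(-128, 127, 9)
b = np.linspace(-128, 127, 9)
k_levels = np.linspace(0.0, 1.0, 5)                 # K grid of the new tag

rng = np.random.default_rng(12)
# luts[i] maps PCS (L*, a*, b*) -> CMY for K = k_levels[i] (synthetic here).
luts = [RegularGridInterpolator((L, a, b), rng.random((9, 9, 9, 3)))
        for _ in k_levels]

def kpcs_to_cmyk(lab, k):
    """Interpolate CMY between the two LUTs bracketing K, then append K."""
    i = min(np.searchsorted(k_levels, k) - 1, len(k_levels) - 2)
    i = max(i, 0)
    t = (k - k_levels[i]) / (k_levels[i + 1] - k_levels[i])
    cmy = (1 - t) * luts[i](lab) + t * luts[i + 1](lab)
    return np.concatenate([cmy.ravel(), [k]])

print(kpcs_to_cmyk(np.array([[50.0, 10.0, -20.0]]), k=0.35))
```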

  5. Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion

    PubMed Central

    Medendorp, W. P.

    2015-01-01

    It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
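
    The reliability weighting implied by such optimal-integration models is the standard inverse-variance rule (our notation, for independent Gaussian estimates in the two frames):

```latex
% If the remembered target location is s_e in the eye-centered map with
% updating noise sigma_e, and s_b in the body-centered map with noise
% sigma_b, the reliability-weighted estimate and its variance are
\[
  \hat{s} \;=\; \frac{\sigma_b^{2}\, s_e + \sigma_e^{2}\, s_b}
                     {\sigma_e^{2} + \sigma_b^{2}},
  \qquad
  \operatorname{Var}(\hat{s}) \;=\;
  \left(\frac{1}{\sigma_e^{2}} + \frac{1}{\sigma_b^{2}}\right)^{-1}
  \;\le\; \min\!\left(\sigma_e^{2}, \sigma_b^{2}\right),
\]
% so keeping both representations in register across body motion yields a
% more precise estimate than either single-frame mechanism alone.
```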

  6. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution of the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice, enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the design optimized with the deterministic method.

  7. Knowledge Discovery for Transonic Regional-Jet Wing through Multidisciplinary Design Exploration

    NASA Astrophysics Data System (ADS)

    Chiba, Kazuhisa; Obayashi, Shigeru; Morino, Hiroyuki

    Data mining is an important facet of solving multi-objective optimization problems, because it is an effective way to discover design knowledge in the large data sets that such problems generate. In the present study, data mining has been performed for a large-scale, real-world multidisciplinary design optimization (MDO) to provide knowledge regarding the design space. The MDO among aerodynamics, structures, and aeroelasticity of a regional-jet wing was carried out using high-fidelity evaluation models with the adaptive range multi-objective genetic algorithm. As a result, nine non-dominated solutions were generated and used for tradeoff analysis among the three objectives. All solutions evaluated during the evolution were analyzed for tradeoffs and the influence of design variables using a self-organizing map to extract key features of the design space. Although the MDO results showed inverted gull-wings as non-dominated solutions, one of the key features found by data mining was the non-gull wing geometry. When this knowledge was applied to one optimum solution, the resulting design was found to have better performance compared with the original geometry designed in the conventional manner.

  8. A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.

    2017-03-01

    Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
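
    Tools of this kind typically wrap a surrogate-model optimization loop around the experiments; the sketch below is a generic Gaussian-process Bayesian optimization with expected improvement, not RODEo itself, and the one-knob "etch rate" objective is a made-up stand-in for a real plasma run.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def etch_rate(p):                        # hypothetical experiment: unknown to
    return -(p - 0.63) ** 2 + 0.4        # the optimizer, one evaluation = one run

def expected_improvement(gp, Xc, best):
    mu, sd = gp.predict(Xc, return_std=True)
    sd = np.maximum(sd, 1e-9)
    z = (mu - best) / sd
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

rng = np.random.default_rng(8)
X = rng.random((3, 1))                   # 3 seed experiments
y = etch_rate(X).ravel()
candidates = np.linspace(0, 1, 201).reshape(-1, 1)

for _ in range(10):                      # each iteration = one new experiment
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    ei = expected_improvement(gp, candidates, y.max())
    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, etch_rate(x_next).ravel())

print("best setting:", X[np.argmax(y)], "best rate:", y.max())
```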

  9. Uncountably many maximizing measures for a dense subset of continuous functions

    NASA Astrophysics Data System (ADS)

    Shinoda, Mao

    2018-05-01

    Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given 'performance' function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions, there exist uncountably many ergodic maximizing measures with full support and positive entropy.

  10. Optimizing Coverage of Three-Dimensional Wireless Sensor Networks by Means of Photon Mapping

    DTIC Science & Technology

    2013-12-01

    "…information about the monitored space is sensed?" Solving this formulation of the AGP relies upon the creation of a model describing how a set of simulated photons will propagate in a 3D virtual environment. Furthermore, the photon model requires an efficient data structure with small memory …

  11. Gamut mapping in a high-dynamic-range color space

    NASA Astrophysics Data System (ADS)

    Preiss, Jens; Fairchild, Mark D.; Ferwerda, James A.; Urban, Philipp

    2014-01-01

    In this paper, we present a novel approach that treats tone mapping as gamut mapping in a high-dynamic-range (HDR) color space. High- and low-dynamic-range (LDR) images as well as device gamut boundaries can simultaneously be represented within such a color space. This enables a unified transformation of the HDR image into the gamut of an output device (in this paper called HDR gamut mapping). An additional aim of this paper is to investigate the suitability of a specific HDR color space to serve as a working color space for the proposed HDR gamut mapping. For the HDR gamut mapping, we use a recent approach that iteratively minimizes an image-difference metric subject to in-gamut images. A psychophysical experiment on an HDR display shows that the standard reproduction workflow of two subsequent transformations, tone mapping and then gamut mapping, may be improved by HDR gamut mapping.

  12. Designing a space-based galaxy redshift survey to probe dark energy

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Percival, Will; Cimatti, Andrea; Mukherjee, Pia; Guzzo, Luigi; Baugh, Carlton M.; Carbone, Carmelita; Franzetti, Paolo; Garilli, Bianca; Geach, James E.; Lacey, Cedric G.; Majerotto, Elisabetta; Orsi, Alvaro; Rosati, Piero; Samushia, Lado; Zamorani, Giovanni

    2010-12-01

    A space-based galaxy redshift survey would have enormous power in constraining dark energy and testing general relativity, provided that its parameters are suitably optimized. We study viable space-based galaxy redshift surveys, exploring the dependence of the Dark Energy Task Force (DETF) figure-of-merit (FoM) on redshift accuracy, redshift range, survey area, target selection and forecast method. Fitting formulae are provided for convenience. We also consider the dependence on the information used: the full galaxy power spectrum P(k), P(k) marginalized over its shape, or just the Baryon Acoustic Oscillations (BAO). We find that the inclusion of growth rate information (extracted using redshift space distortion and galaxy clustering amplitude measurements) leads to a factor of ~3 improvement in the FoM, assuming general relativity is not modified. This inclusion partially compensates for the loss of information when only the BAO are used to give geometrical constraints, rather than using the full P(k) as a standard ruler. We find that a space-based galaxy redshift survey covering ~20,000 deg² with σz/(1 + z) ≤ 0.001 exploits a redshift range that is only easily accessible from space, extends to sufficiently low redshifts to allow both a vast 3D map of the universe using a single tracer population and overlap with ground-based surveys to enable robust modelling of systematic effects. We argue that these parameters are close to their optimal values given current instrumental and practical constraints.

  13. rasbhari: Optimizing Spaced Seeds for Database Searching, Read Mapping and Alignment-Free Sequence Comparison.

    PubMed

    Hahn, Lars; Leimeister, Chris-André; Ounit, Rachid; Lonardi, Stefano; Morgenstern, Burkhard

    2016-10-01

    Many algorithms for sequence analysis rely on word matching or word statistics. Often, these approaches can be improved if binary patterns representing match and don't-care positions are used as a filter, such that only those positions of words are considered that correspond to the match positions of the patterns. The performance of these approaches, however, depends on the underlying patterns. Herein, we show that the overlap complexity of a pattern set that was introduced by Ilie and Ilie is closely related to the variance of the number of matches between two evolutionarily related sequences with respect to this pattern set. We propose a modified hill-climbing algorithm to optimize pattern sets for database searching, read mapping and alignment-free sequence comparison of nucleic-acid sequences; our implementation of this algorithm is called rasbhari. Depending on the application at hand, rasbhari can either minimize the overlap complexity of pattern sets, maximize their sensitivity in database searching or minimize the variance of the number of pattern-based matches in alignment-free sequence comparison. We show that, for database searching, rasbhari generates pattern sets with slightly higher sensitivity than existing approaches. In our Spaced Words approach to alignment-free sequence comparison, pattern sets calculated with rasbhari led to more accurate estimates of phylogenetic distances than the randomly generated pattern sets that we previously used. Finally, we used rasbhari to generate patterns for short read classification with CLARK-S. Here too, the sensitivity of the results could be improved, compared to the default patterns of the program. We integrated rasbhari into Spaced Words; the source code of rasbhari is freely available at http://rasbhari.gobics.de/.
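
    The hill-climbing move is simple: swap a match position with a don't-care in one pattern and keep the change if the objective improves. In the sketch below the objective is an overlap-complexity-style score (a sum over pattern pairs and shifts of 2 to the number of aligned match positions); this is our reading of the definition, and the published formulation may differ in detail.

```python
import random

# Hill-climbing over a set of binary patterns ('1' = match, '0' = don't-care),
# preserving pattern weight and the '1' endpoints on every move.

def overlap_score(patterns):
    total = 0
    for p in patterns:
        for q in patterns:
            for shift in range(-len(q) + 1, len(p)):
                n = sum(1 for i, c in enumerate(p)
                        if c == '1' and 0 <= i - shift < len(q)
                        and q[i - shift] == '1')
                total += 2 ** n
    return total

def mutate(patterns, rng):
    new = list(patterns)
    k = rng.randrange(len(new))
    p = list(new[k])
    ones = [i for i, c in enumerate(p) if c == '1' and 0 < i < len(p) - 1]
    zeros = [i for i, c in enumerate(p) if c == '0']
    i, j = rng.choice(ones), rng.choice(zeros)
    p[i], p[j] = '0', '1'
    new[k] = ''.join(p)
    return new

rng = random.Random(9)
patterns = ['1110101011', '1101100111', '1011011101']   # toy seed set
best = overlap_score(patterns)
for _ in range(2000):
    cand = mutate(patterns, rng)
    s = overlap_score(cand)
    if s < best:                       # lower overlap complexity is better
        patterns, best = cand, s
print(patterns, best)
```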

  14. Group-level variations in motor representation areas of thenar and anterior tibial muscles: Navigated Transcranial Magnetic Stimulation Study.

    PubMed

    Niskanen, Eini; Julkunen, Petro; Säisänen, Laura; Vanninen, Ritva; Karjalainen, Pasi; Könönen, Mervi

    2010-08-01

    Navigated transcranial magnetic stimulation (TMS) can be used to stimulate functional cortical areas at precise anatomical location to induce measurable responses. The stimulation has commonly been focused on anatomically predefined motor areas: TMS of that area elicits a measurable muscle response, the motor evoked potential. In clinical pathologies, however, the well-known homunculus somatotopy theory may not be straightforward, and the representation area of the muscle is not fixed. Traditionally, the anatomical locations of TMS stimulations have not been reported at the group level in standard space. This study describes a methodology for group-level analysis by investigating the normal representation areas of thenar and anterior tibial muscle in the primary motor cortex. The optimal representation area for these muscles was mapped in 59 healthy right-handed subjects using navigated TMS. The coordinates of the optimal stimulation sites were then normalized into standard space to determine the representation areas of these muscles at the group-level in healthy subjects. Furthermore, 95% confidence interval ellipsoids were fitted into the optimal stimulation site clusters to define the variation between subjects in optimal stimulation sites. The variation was found to be highest in the anteroposterior direction along the superior margin of the precentral gyrus. These results provide important normative information for clinical studies assessing changes in the functional cortical areas because of plasticity of the brain. Furthermore, it is proposed that the presented methodology to study TMS locations at the group level on standard space will be a suitable tool for research purposes in population studies. 2010 Wiley-Liss, Inc.

  15. Investigation of multidimensional control systems in the state space and wavelet medium

    NASA Astrophysics Data System (ADS)

    Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.

    2018-05-01

    The notions of "one-dimensional-point" and "multidimensional-point" automatic control systems are introduced. To demonstrate the joint use of approaches based on the concepts of state space and wavelet transforms, a method for optimal control in a state-space medium represented in the form of time-frequency representations (maps) is considered. The computer-aided control system is formed on the basis of the similarity transformation method, which makes it possible to dispense with reduced state-variable observers. 1D material-flow signals formed by primary transducers are converted by means of wavelet transformations into multidimensional, concentrated-at-a-point variables in the form of time-frequency distributions of Cohen's class. An algorithm for synthesizing a stationary controller for feeding processes is given. It is concluded that forming an optimal control law with time-frequency distributions available improves the quality of transient processes in the feeding subsystems and the mixing unit. The efficiency of the method is illustrated by an example of the current registration of material flows in the multi-feeding unit.

  16. Symbolic, Nonsymbolic and Conceptual: An Across-Notation Study on the Space Mapping of Numerals.

    PubMed

    Zhang, Yu; You, Xuqun; Zhu, Rongjuan

    2016-07-01

    Previous studies suggested that there are interconnections between the two numeral modalities of symbolic notation and nonsymbolic notation (arrays of dots); both differences and similarities in the processing and representation of the two modalities have been found in previous research. However, whether there are differences between the spatial representation and numeral-space mapping of the two numeral modalities of symbolic notation and nonsymbolic notation has not been investigated. The present study aims to examine whether such differences exist; in particular, how zero, as both a symbolic magnitude numeral and a nonsymbolic conceptual numeral, maps onto space, and whether the mapping happens automatically at an early stage of numeral information processing. Results of the two experiments demonstrate that the low-level processing of symbolic numerals, including zero, and of nonsymbolic numerals, except zero, can map onto space, whereas the low-level processing of nonsymbolic zero as a semantic conceptual numeral cannot, indicating the special status of zero in the numeral domain. The present study indicates that the processing of non-semantic numerals can map onto space, whereas semantic conceptual numerals cannot. © The Author(s) 2016.

  17. Adaptive treatment-length optimization in spatiobiologically integrated radiotherapy

    NASA Astrophysics Data System (ADS)

    Ajdari, Ali; Ghate, Archis; Kim, Minsun

    2018-04-01

    Recent theoretical research on spatiobiologically integrated radiotherapy has focused on optimization models that adapt fluence-maps to the evolution of tumor state, for example, cell densities, as observed in quantitative functional images acquired over the treatment course. We propose an optimization model that adapts the length of the treatment course as well as the fluence-maps to such imaged tumor state. Specifically, after observing the tumor cell densities at the beginning of a session, the treatment planner solves a group of convex optimization problems to determine an optimal number of remaining treatment sessions, and a corresponding optimal fluence-map for each of these sessions. The objective is to minimize the total number of tumor cells remaining (TNTCR) at the end of this proposed treatment course, subject to upper limits on the biologically effective dose delivered to the organs-at-risk. This fluence-map is administered in future sessions until the next image is available, and then the number of sessions and the fluence-map are re-optimized based on the latest cell density information. We demonstrate via computer simulations on five head-and-neck test cases that such adaptive treatment-length and fluence-map planning reduces the TNTCR and increases the biological effect on the tumor while employing shorter treatment courses, as compared to only adapting fluence-maps and using a pre-determined treatment course length based on one-size-fits-all guidelines.
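
    The structure of the approach (one convex fluence-map problem per candidate treatment length, then pick the length with the lowest TNTCR) can be sketched with cvxpy. The cell-kill model is simplified to exp(-αNd) so the objective stays convex, and all matrices, sizes and limits are illustrative, not the paper's dose or biology models.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(10)
n_beamlets, n_tumor, n_oar = 15, 40, 25
A_t = rng.random((n_tumor, n_beamlets))      # dose-influence, tumor voxels
A_o = 0.3 * rng.random((n_oar, n_beamlets))  # dose-influence, OAR voxels
rho = rng.random(n_tumor) * 1e6              # imaged tumor cell densities
alpha, ab_ratio, bed_limit = 0.35, 3.0, 60.0

def solve_for_length(N):
    x = cp.Variable(n_beamlets, nonneg=True)        # per-session fluence map
    d_t, d_o = A_t @ x, A_o @ x                     # per-session voxel doses
    tntcr = cp.sum(cp.multiply(rho, cp.exp(-alpha * N * d_t)))
    bed = N * (d_o + cp.square(d_o) / ab_ratio)     # linear-quadratic BED
    prob = cp.Problem(cp.Minimize(tntcr), [bed <= bed_limit])
    prob.solve()
    return prob.value, x.value

# Re-optimize the remaining course length: solve once per candidate N and
# keep the length minimizing the total number of tumor cells remaining.
results = {N: solve_for_length(N)[0] for N in range(5, 31, 5)}
best_N = min(results, key=results.get)
print({N: f"{v:.3g}" for N, v in results.items()}, "-> choose N =", best_N)
```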

  18. Progress in low-resolution ab initio phasing with CrowdPhase

    DOE PAGES

    Jorda, Julien; Sawaya, Michael R.; Yeates, Todd O.

    2016-03-01

    Ab initio phasing by direct computational methods in low-resolution X-ray crystallography is a long-standing challenge. A common approach is to consider it as two subproblems: sampling of phase space and identification of the correct solution. While the former is amenable to a myriad of search algorithms, devising a reliable target function for the latter problem remains an open question. Here, recent developments in CrowdPhase, a collaborative online game powered by a genetic algorithm that evolves an initial population of individuals with random genetic make-up (i.e. random phases), each expressing a phenotype in the form of an electron-density map, are presented. Success relies on the ability of human players to visually evaluate the quality of these maps and, following a Darwinian survival-of-the-fittest concept, direct the search towards optimal solutions. While an initial study demonstrated the feasibility of the approach, some important crystallographic issues were overlooked for the sake of simplicity. To address these, the new CrowdPhase includes consideration of space-group symmetry, a method for handling missing amplitudes, the use of a map correlation coefficient as a quality metric and a solvent-flattening step. Lastly, the performance of this installment is discussed for two low-resolution test cases based on bona fide diffraction data.

  19. Application of Approximate Pattern Matching in Two Dimensional Spaces to Grid Layout for Biochemical Network Maps

    PubMed Central

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    Background For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance their comprehensibility and traceability. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. Conclusions Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by fast, non-grid algorithms onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486

  20. Application of approximate pattern matching in two dimensional spaces to grid layout for biochemical network maps.

    PubMed

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance their comprehensibility and traceability. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by fast, non-grid algorithms onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html.
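
    The redistribution stage can be imitated with a much simpler greedy pass than the paper's approximate pattern matching: each preprocessed node is snapped to the nearest free square grid point, searched in expanding rings, which roughly preserves the topology of the fast layout. All names and sizes below are illustrative.

```python
import numpy as np

# Greedy simplification of the grid-snapping stage: nodes from a fast
# non-grid layout are moved to the nearest unoccupied square grid point,
# searching outwards ring by ring. This only illustrates the idea; the
# paper's approximate pattern matching is more sophisticated.

def snap_to_grid(pos, grid_size):
    taken, placed = set(), {}
    # place nodes that sit closest to their ideal cell first
    order = sorted(pos, key=lambda k: np.linalg.norm(pos[k] - np.round(pos[k])))
    for node in order:
        gx, gy = np.round(pos[node]).astype(int)
        for r in range(grid_size):                 # expanding search rings
            ring = [(gx + dx, gy + dy)
                    for dx in range(-r, r + 1) for dy in range(-r, r + 1)
                    if max(abs(dx), abs(dy)) == r
                    and (gx + dx, gy + dy) not in taken]
            if ring:
                best = min(ring, key=lambda c: np.hypot(c[0] - pos[node][0],
                                                        c[1] - pos[node][1]))
                taken.add(best)
                placed[node] = best
                break
    return placed

rng = np.random.default_rng(0)
layout = {i: rng.random(2) * 10 for i in range(20)}
print(snap_to_grid(layout, grid_size=12))
```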

  1. Progress in low-resolution ab initio phasing with CrowdPhase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jorda, Julien; Sawaya, Michael R.; Yeates, Todd O.

    Ab initio phasing by direct computational methods in low-resolution X-ray crystallography is a long-standing challenge. A common approach is to consider it as two subproblems: sampling of phase space and identification of the correct solution. While the former is amenable to a myriad of search algorithms, devising a reliable target function for the latter problem remains an open question. Here, recent developments in CrowdPhase, a collaborative online game powered by a genetic algorithm that evolves an initial population of individuals with random genetic make-up (i.e. random phases), each expressing a phenotype in the form of an electron-density map, are presented. Success relies on the ability of human players to visually evaluate the quality of these maps and, following a Darwinian survival-of-the-fittest concept, direct the search towards optimal solutions. While an initial study demonstrated the feasibility of the approach, some important crystallographic issues were overlooked for the sake of simplicity. To address these, the new CrowdPhase includes consideration of space-group symmetry, a method for handling missing amplitudes, the use of a map correlation coefficient as a quality metric and a solvent-flattening step. Lastly, performances of this installment are discussed for two low-resolution test cases based on bona fide diffraction data.

  2. Multiobjective evolutionary algorithm with many tables for purely ab initio protein structure prediction.

    PubMed

    Brasil, Christiane Regina Soares; Delbem, Alexandre Claudio Botazzo; da Silva, Fernando Luís Barroso

    2013-07-30

    This article focuses on the development of an approach for ab initio protein structure prediction (PSP) without using any earlier knowledge from similar protein structures, such as fragment-based statistics or inference of secondary structures. Such an approach is called purely ab initio prediction. The article shows that well-designed multiobjective evolutionary algorithms can predict relevant protein structures in a purely ab initio way. One challenge for purely ab initio PSP is the prediction of structures with β-sheets. To work with such proteins, this research has also developed procedures to efficiently estimate hydrogen bond and solvation contribution energies. Considering van der Waals, electrostatic, hydrogen bond, and solvation contribution energies, PSP is a problem with four energetic terms to be minimized. Each interaction energy term can be considered an objective of an optimization method. Combinatorial problems with four objectives have been considered too complex for the available multiobjective optimization (MOO) methods. The proposed approach, called "Multiobjective evolutionary algorithm with many tables" (MEAMT), can efficiently deal with four objectives by combining them, performing a more adequate sampling of the objective space. Therefore, this method can better map the promising regions in this space, predicting structures in a purely ab initio way. In other words, MEAMT is an efficient optimization method for MOO, which simultaneously explores the search space as well as the objective space. MEAMT can predict structures with one or two domains with RMSDs comparable to values obtained by recently developed ab initio methods (GAPFCG, I-PAES, and Quark) that use different levels of earlier knowledge. Copyright © 2013 Wiley Periodicals, Inc.
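
    The many-tables mechanism can be caricatured in a few lines: a single pool of candidate solutions is ranked in several tables, each under a different weighted combination of the four energy terms, and new pools are recombined from the tables' best entries. The energy function below is a placeholder, not a force field, and the table sizes and weights are arbitrary; this is only a sketch of the sampling idea, not MEAMT itself.

```python
import numpy as np

# Caricature of a many-tables multiobjective EA: each table ranks the same
# pool of candidates by a different weighted sum of four energy terms, so
# the search samples the objective space more evenly than a single ranking.

rng = np.random.default_rng(1)

def energies(x):   # stand-ins for vdW, electrostatic, H-bond, solvation terms
    return np.array([np.sum(x ** 2), np.sum(np.cos(x)),
                     np.sum(np.abs(x)), np.sum((x - 1.0) ** 2)])

weights = [np.ones(4) / 4] + [w / w.sum() for w in rng.random((7, 4))]
tables = [[] for _ in weights]            # one ranked table per weight vector
pool = [rng.normal(size=10) for _ in range(40)]

for _ in range(200):
    child = pool[rng.integers(len(pool))] + rng.normal(scale=0.1, size=10)
    e = energies(child)
    for table, w in zip(tables, weights):
        table.append((float(w @ e), child))
        table.sort(key=lambda entry: entry[0])
        del table[20:]                    # keep the 20 best per table
    pool = [conf for table in tables for _, conf in table[:5]]

print("best equally-weighted energy:", tables[0][0][0])
```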

  3. Gender and social geography: impact on Lady Health Workers mobility in Pakistan.

    PubMed

    Mumtaz, Zubia

    2012-10-16

    In Pakistan, where gendered norms restrict women's mobility, female community health workers (CHWs) provide doorstep primary health services to home-bound women. The program has not achieved optimal functioning. One reason, I argue, may be that the CHWs are unable to make home visits because they have to operate within the same gender system that necessitated their appointment in the first place. Ethnographic research shows that women's mobility in Pakistan is determined not so much by physical geography as by social geography (the analysis of social phenomena in space). Irrespective of physical location, the presence of biradaria members (extended family) creates a socially acceptable 'inside space' to which women are limited. The presence of a non-biradari person, especially a man, transforms any space into an 'outside space', a forbidden space. This study aims to understand how these cultural norms affect CHWs' home-visit rates and the quality of services delivered. Data will be collected in district Attock, Punjab. Twenty randomly selected CHWs will first be interviewed to explore their experiences of delivering doorstep services in the context of gendered norms that promote women's seclusion. Each CHW will be requested to draw a map of her catchment area using social mapping techniques. These maps will be used to survey women of reproductive age to assess variations in the CHW's home visitation rates and quality of family planning services provided. A sample size of 760 households (38 per CHW) is estimated to have the power to detect, with 95% confidence, households the CHWs do not visit. To explore the role of the larger community in shaping the CHWs' mobility experiences, 25 community members will be interviewed and five CHWs will be observed as they conduct their home visits. The survey data will be merged with the maps to demonstrate if any disjunctures exist between CHWs' social geography and physical geography. Furthermore, the impacts these geographies have on home visitation rates and quality of services delivered will be explored. The study will provide generic and theoretical insights into how the CHW program policies and operations can improve working conditions to facilitate the work of female staff in order to ultimately provide high-quality services.

  4. Mapping Children--Mapping Space.

    ERIC Educational Resources Information Center

    Pick, Herbert L., Jr.

    Research is underway concerning the way the perception, conception, and representation of spatial layout develops. Three concepts are important here--space itself, frame of reference, and cognitive map. Cognitive map refers to a form of representation of the behavioral space, not paired associate or serial response learning. Other criteria…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ureba, A.; Salguero, F. J.; Barbeiro, A. R.

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called a "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet doses in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV treated using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check the CARMEN system showed high agreement with the experimental measurements. Conclusions: A Monte Carlo treatment planning model exclusively based on maps generated from patient imaging data has been presented. The sequencing of these maps yields deliverable apertures which are then weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
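
    The aperture-weighting step described above is a linear program. A minimal sketch with invented dose-deposition matrices might look as follows: minimize the total target underdose slack subject to an OAR dose cap and nonnegative weights. The matrix names and sizes are illustrative, not CARMEN's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP for weighting pre-sequenced apertures: meet a target prescription
# as closely as possible (via underdose slack variables) while capping OAR
# dose and keeping weights nonnegative. Dose matrices are random stand-ins.

rng = np.random.default_rng(0)
n_apertures, n_target, n_oar = 8, 50, 30
D_t = rng.random((n_target, n_apertures))      # dose to target voxels
D_o = 0.3 * rng.random((n_oar, n_apertures))   # dose to OAR voxels
presc, oar_cap = 2.0, 1.0

# variables: [w (aperture weights), s (underdose slack per target voxel)]
c = np.concatenate([np.zeros(n_apertures), np.ones(n_target)])  # min sum slack
A_ub = np.block([[-D_t, -np.eye(n_target)],                # D_t w + s >= presc
                 [D_o, np.zeros((n_oar, n_target))]])      # D_o w <= cap
b_ub = np.concatenate([-presc * np.ones(n_target), oar_cap * np.ones(n_oar)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print("aperture weights:", np.round(res.x[:n_apertures], 3))
```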

  6. Diffusion radiomics analysis of intratumoral heterogeneity in a murine prostate cancer model following radiotherapy: Pixelwise correlation with histology.

    PubMed

    Lin, Yu-Chun; Lin, Gigin; Hong, Ji-Hong; Lin, Yi-Ping; Chen, Fang-Hsin; Ng, Shu-Hang; Wang, Chun-Chieh

    2017-08-01

    To investigate the biological meaning of apparent diffusion coefficient (ADC) values in tumors following radiotherapy. Five mice bearing TRAMP-C1 tumors were half-irradiated with a dose of 15 Gy. Diffusion-weighted images, using multiple b-values from 0 to 3000 s/mm2, were acquired at 7T on day 6. ADC values calculated by a two-point estimate and by monoexponential fitting of the signal decay were compared between the irradiated and nonirradiated regions of the tumor. Pixelwise ADC maps were correlated with histological metrics including nuclear counts, nuclear sizes, nuclear spaces, cytoplasmic spaces, and extracellular spaces. As compared with the nonirradiated region, the irradiated region exhibited significant increases in ADC, extracellular space, and nuclear size, and a significant decrease in nuclear counts (P < 0.001 for all). Optimal ADC to differentiate the irradiated from nonirradiated regions was achieved at a b-value of 800 s/mm2 by both the two-point method and monoexponential curve fitting. ADC positively correlated with extracellular spaces (r = 0.74) and nuclear sizes (r = 0.72), and negatively correlated with nuclear counts (r = -0.82; P < 0.001 for all). As a radiomic biomarker, ADC maps correlated pixelwise with histological metrics could be a means of evaluating tumor heterogeneity and responses to radiotherapy. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2017;46:483-489. © 2017 International Society for Magnetic Resonance in Medicine.
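
    For reference, the two ADC estimates mentioned above reduce to a logarithm ratio and a log-linear regression. The snippet below reproduces both on a synthetic monoexponential signal; the values are illustrative, not the study's data.

```python
import numpy as np

# Two standard ADC estimates: a two-point estimate from b = 0 and one
# nonzero b-value, and a pixelwise monoexponential fit of
# S(b) = S0 * exp(-b * ADC) over several b-values. Signals are synthetic.

b_values = np.array([0, 200, 800, 1500, 3000])       # s/mm2
true_adc = 1.1e-3                                    # mm2/s
signals = 1000.0 * np.exp(-b_values * true_adc)

# two-point estimate using b = 0 and b = 800 s/mm2
adc_2pt = np.log(signals[0] / signals[2]) / b_values[2]

# monoexponential fit: linear regression of log-signal against b
slope, intercept = np.polyfit(b_values, np.log(signals), 1)
adc_fit = -slope

print(f"two-point ADC: {adc_2pt:.2e}  fitted ADC: {adc_fit:.2e}")
```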

  7. Impact of Spot Size and Spacing on the Quality of Robustly Optimized Intensity Modulated Proton Therapy Plans for Lung Cancer.

    PubMed

    Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei

    2018-06-01

    To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to the internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histogram band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organ-at-risk (OAR) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small-spot machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects, compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large-spot machine, which gives the planning system more freedom to compensate for the higher sensitivity to uncertainties and interplay effects in lung cancer treatments. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Defining process design space for monoclonal antibody cell culture.

    PubMed

    Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A

    2010-08-15

    The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process, which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied in an integrated fashion with the production bioreactor using a two-stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two-level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key, or critical. In addition, the models generated from the data provide further insight into balancing productivity of the cell culture process with product quality considerations. Finally, process and product-related impurity clearance was evaluated by studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
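
    The two-stage DOE can be sketched generically: a two-level factorial core (full factorial here for brevity, rather than a Resolution IV fraction) augmented with axial and center points into a face-centered central composite design. The factor names below are hypothetical bioreactor parameters, not the study's actual factors.

```python
import itertools
import numpy as np

# Generic face-centered central composite design: 2^k factorial corners,
# 2k axial points on the faces, and replicated center points, all in coded
# (-1, 0, +1) units. Factor names are invented for illustration.

factors = ["pH", "temperature", "DO", "feed_rate"]
k = len(factors)

core = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))  # 2^k corners
axial = np.vstack([v * row for v in (-1.0, 1.0) for row in np.eye(k)])
center = np.zeros((3, k))                                        # replicates
ccd = np.vstack([core, axial, center])
print(f"{len(ccd)} runs over factors {factors}")
```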

  9. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuit design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.
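
    A toy analogue of the centring step is a max-min problem over feasibility margins. In the sketch below, raw constraint values stand in for the paper's probability-related normed distances, and the two constraints are invented; it only shows the max-min structure, not the boundary search or the space-mapping surrogates.

```python
import numpy as np
from scipy.optimize import minimize

# Max-min design centring on a toy feasible region: maximize the smallest
# margin to the constraint boundaries g(x) <= 0. Constraint values are a
# crude stand-in for the normed distances used in the paper.

def g(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,   # stay inside a circle
                     x[0] + x[1] - 2.0])            # stay below a line

def neg_worst_margin(x):
    return -np.min(-g(x))       # minimize the negative of the worst margin

res = minimize(neg_worst_margin, x0=np.zeros(2), method="Nelder-Mead")
print("centred design:", res.x, " worst-case margin:", -res.fun)
```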

  10. Optimal reorientation of asymmetric underactuated spacecraft using differential flatness and receding horizon control

    NASA Astrophysics Data System (ADS)

    Cai, Wei-wei; Yang, Le-ping; Zhu, Yan-wei

    2015-01-01

    This paper presents a novel method integrating nominal trajectory optimization and tracking for the reorientation control of an underactuated spacecraft with only two available control torque inputs. By employing a pseudo input along the uncontrolled axis, the flatness property of a general underactuated spacecraft is extended explicitly, by which the reorientation trajectory optimization problem is formulated in the flat output space with all the differential constraints eliminated. Ultimately, the flat output optimization problem is transformed into a nonlinear programming problem via the Chebyshev pseudospectral method, which is improved by conformal mapping and barycentric rational interpolation techniques to overcome the side effects of the differentiation matrix's ill-conditioning on numerical accuracy. Treating the trajectory tracking control as a state regulation problem, we develop a robust closed-loop tracking control law using the receding-horizon control method, and compute the feedback control at each control cycle rapidly via the differential transformation method. Numerical simulation results show that the proposed control scheme is feasible and effective for the reorientation maneuver.
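
    The Chebyshev pseudospectral discretization referred to above rests on the standard differentiation matrix on Chebyshev-Gauss-Lobatto nodes; the snippet below is a direct transcription of the classical construction (as in Trefethen's cheb.m), independent of this paper's conformal-map and barycentric refinements.

```python
import numpy as np

# Standard Chebyshev differentiation matrix on Chebyshev-Gauss-Lobatto
# nodes: D @ f(x) approximates f'(x) with spectral accuracy.

def cheb(n):
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)         # CGL nodes on [-1, 1]
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative row sums on diag
    return D, x

D, x = cheb(16)
print("max error vs d/dx of sin:", np.abs(D @ np.sin(x) - np.cos(x)).max())
```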

  11. Local Free-Space Mapping and Path Guidance for Mobile Robots.

    DTIC Science & Technology

    1988-03-01

    Technical Document 1227, March 1988: Local Free-Space Mapping and Path Guidance for Mobile Robots, by William T. Gex and Nancy L. Campbell. Recoverable section headings from the scanned document include: Description of Robot System; Free-Space Mapping; Map Construction; Mapping Example; Sensor Unreliability; and Path Guidance.

  12. Seamless Warping of Diffusion Tensor Fields

    PubMed Central

    Hao, Xuejun; Bansal, Ravi; Plessen, Kerstin J.; Peterson, Bradley S.

    2008-01-01

    To warp diffusion tensor fields accurately, tensors must be reoriented in the space to which the tensors are warped based on both the local deformation field and the orientation of the underlying fibers in the original image. Existing algorithms for warping tensors typically use forward mapping deformations in an attempt to ensure that the local deformations in the warped image remains true to the orientation of the underlying fibers; forward mapping, however, can also create “seams” or gaps and consequently artifacts in the warped image by failing to define accurately the voxels in the template space where the magnitude of the deformation is large (e.g., |Jacobian| > 1). Backward mapping, in contrast, defines voxels in the template space by mapping them back to locations in the original imaging space. Backward mapping allows every voxel in the template space to be defined without the creation of seams, including voxels in which the deformation is extensive. Backward mapping, however, cannot reorient tensors in the template space because information about the directional orientation of fiber tracts is contained in the original, unwarped imaging space only, and backward mapping alone cannot transfer that information to the template space. To combine the advantages of forward and backward mapping, we propose a novel method for the spatial normalization of diffusion tensor (DT) fields that uses a bijection (a bidirectional mapping with one-to-one correspondences between image spaces) to warp DT datasets seamlessly from one imaging space to another. Once the bijection has been achieved and tensors have been correctly relocated to the template space, we can appropriately reorient tensors in the template space using a warping method based on Procrustean estimation. PMID:18334425
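
    On a scalar image, the backward-mapping half of the scheme is a one-liner with an interpolator: every template voxel samples the source image at its mapped-back location, so no seams can appear. The deformation below is made up, and the tensor-specific reorientation step (which needs the forward orientation information) is omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Backward mapping on a scalar image: define every voxel of the template
# grid by sampling the source at its pulled-back coordinates. Unlike
# forward mapping, this leaves no undefined voxels where the deformation
# stretches space. The deformation field here is a hypothetical example.

src = np.random.default_rng(0).random((64, 64))
yy, xx = np.mgrid[0:64, 0:64].astype(float)

back_y = yy + 3.0 * np.sin(xx / 10.0)     # invented smooth pullback map
back_x = xx + 3.0 * np.cos(yy / 10.0)

warped = map_coordinates(src, [back_y, back_x], order=1, mode="nearest")
print(warped.shape)    # every template voxel is defined: no seams or gaps
```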

  13. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

    Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing larger mapping datasets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and its verification using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps, with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943

  14. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    To meet the forecasting target of key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210
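
    With the cuckoo-search tuning omitted, the core pipeline, feature reduction by isometric mapping followed by a small feedforward regressor, can be sketched with scikit-learn on stand-in features; all data, dimensions, and hyperparameters below are invented.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neural_network import MLPRegressor

# Soft-sensor skeleton: froth-image features (random stand-ins for the
# color, texture, and shape features) are reduced by isometric mapping,
# then a small feedforward network predicts the concentrate grade.

rng = np.random.default_rng(0)
X = rng.random((300, 40))                    # color + texture + shape features
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(300)   # synthetic grade

X_low = Isomap(n_neighbors=10, n_components=8).fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X_low[:250], y[:250])
print("held-out R^2:", model.score(X_low[250:], y[250:]))
```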

  15. Stochastic Optimal Control via Bellman's Principle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Sun, Jian Q.

    2003-01-01

    This paper presents a method for finding optimal controls of nonlinear systems subject to random excitations. The method is capable of generating global control solutions when state and control constraints are present. The solution is global in the sense that controls for all initial conditions in a region of the state space are obtained. The approach is based on Bellman's Principle of optimality, the Gaussian closure, and the short-time Gaussian approximation. Examples include a system with a state-dependent diffusion term, a system in which the infinite hierarchy of moment equations cannot be analytically closed, and an impact system with an elastic boundary. The uncontrolled and controlled dynamics are studied by creating a Markov chain with a control-dependent transition probability matrix via the Generalized Cell Mapping method. In this fashion, both the transient and stationary controlled responses are evaluated. The results show excellent control performance.
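
    A miniature Generalized Cell Mapping illustrates the Markov-chain construction: a 1-D state interval is partitioned into cells, the noisy dynamics are sampled from points in each cell, and the accumulated transition matrix yields transient and stationary responses by matrix powers. The dynamics below are a toy double well, not the paper's controlled systems.

```python
import numpy as np

# Minimal Generalized Cell Mapping: sample the stochastic map from each
# cell, estimate a transition probability matrix P, and read off the
# long-run response from powers of P. Dynamics and noise are placeholders.

rng = np.random.default_rng(5)
n_cells, samples = 50, 200
edges = np.linspace(-2.0, 2.0, n_cells + 1)

def step(x):                                  # toy noisy double-well map
    return x - 0.1 * (x ** 3 - x) + 0.05 * rng.standard_normal(x.shape)

P = np.zeros((n_cells, n_cells))
for i in range(n_cells):
    x0 = rng.uniform(edges[i], edges[i + 1], samples)
    j = np.clip(np.digitize(step(x0), edges) - 1, 0, n_cells - 1)
    P[i] = np.bincount(j, minlength=n_cells) / samples

pi = np.linalg.matrix_power(P, 500)[0]        # approximate stationary response
print("stationary mass near the two wells:", pi[:25].sum(), pi[25:].sum())
```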

  16. Mathematical Inversion of Lightning Data: Techniques and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William

    2003-01-01

    A survey of some interesting mathematical inversion studies dealing with radio, optical, and electrostatic measurements of lightning is presented: why NASA is interested in lightning, which specific physical properties of lightning are retrieved, and which mathematical techniques are used to perform the retrievals. In particular, a relatively new multi-station VHF time-of-arrival (TOA) antenna network is now on-line in Northern Alabama and will be discussed. The network, called the Lightning Mapping Array (LMA), employs GPS timing and detects VHF radiation from discrete segments (effectively point emitters) that comprise the channel of lightning strokes within cloud and ground flashes. The LMA supports on-going ground-validation activities of the low-Earth-orbiting Lightning Imaging Sensor (LIS) satellite developed at NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama. The LMA also provides detailed studies of the distribution and evolution of thunderstorms and lightning in the Tennessee Valley, and offers interesting comparisons with other meteorological/geophysical datasets. In order to take full advantage of these benefits, it is essential that the LMA channel mapping accuracy (in both space and time) be fully characterized and optimized. A new channel mapping retrieval algorithm is introduced for this purpose. To characterize the spatial distribution of retrieval errors, the algorithm has been applied to analyze tens of millions of computer-simulated lightning VHF point sources placed at various ranges, azimuths, and altitudes relative to the LMA network. Statistical results are conveniently summarized in high-resolution, color-coded error maps.
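
    At its core, a TOA retrieval is a small nonlinear least-squares problem: solve for the emitter position and emission time that best explain the arrival times at the stations. The station layout, noise level, and starting guess below are invented, and this sketch is independent of the LMA's specific algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy time-of-arrival source retrieval: fit (x, y, z, t0) so that predicted
# arrival times t0 + |station - source| / c match the measured ones.

C = 3.0e8                                        # speed of light (m/s)
stations = np.array([[0, 0, 0], [10e3, 0, 0.2e3],
                     [0, 12e3, 0.1e3], [9e3, 11e3, 0], [5e3, 5e3, 0.3e3]])
src, t0 = np.array([4e3, 7e3, 8e3]), 1.0e-3      # "true" emitter, for testing
toa = t0 + np.linalg.norm(stations - src, axis=1) / C
toa += 3e-8 * np.random.default_rng(2).standard_normal(len(toa))  # ~30 ns jitter

def residuals(p):
    return p[3] + np.linalg.norm(stations - p[:3], axis=1) / C - toa

fit = least_squares(residuals, x0=[5e3, 5e3, 5e3, 0.0])
print("retrieved source (m):", np.round(fit.x[:3]))
```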

  17. XRF map identification problems based on a PDE electrodeposition model

    NASA Astrophysics Data System (ADS)

    Sgura, Ivonne; Bozzini, Benedetto

    2017-04-01

    In this paper we focus on the following map identification problem (MIP): given a morphochemical reaction-diffusion (RD) PDE system modeling an electrodeposition process, we look for a time t*, belonging to the transient dynamics, and a set of parameters p, such that the PDE solution for the morphology h(x, y, t*; p) and for the chemistry θ(x, y, t*; p) approximates a given experimental map M*. Towards this aim, we introduce a numerical algorithm using singular value decomposition (SVD) and the Frobenius norm to give a measure of the error distance between experimental maps for h and θ and simulated solutions of the RD-PDE system on a fixed time integration interval. The technique proposed allows quantitative use of microspectroscopy images, such as XRF maps. Specifically, in this work we have modelled the morphology and manganese distributions of nanostructured components of innovative batteries and we have followed their changes resulting from ageing under operating conditions. The availability of quantitative information on the space-time evolution of active materials in terms of model parameters will allow dramatic improvements in knowledge-based optimization of battery fabrication and operation.
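
    One plausible reading of the error measure is sketched below: compare the experimental map with each stored snapshot through the Frobenius norm of the difference of their truncated SVD reconstructions, and keep the best-matching time. The rank, map sizes, and data are arbitrary stand-ins; the paper's actual distance may differ in detail.

```python
import numpy as np

# Assumed SVD/Frobenius error distance between maps: low-rank SVD
# reconstructions are compared in the Frobenius norm, scanning stored
# simulation snapshots for the best match to an experimental map.

def svd_distance(A, B, rank=5):
    def low_rank(M):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return np.linalg.norm(low_rank(A) - low_rank(B), ord="fro")

rng = np.random.default_rng(3)
M_exp = rng.random((50, 50))                            # experimental XRF map
snapshots = [rng.random((50, 50)) for _ in range(20)]   # h(x, y, t_k; p)
t_best = min(range(20), key=lambda k: svd_distance(M_exp, snapshots[k]))
print("best-matching snapshot index:", t_best)
```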

  18. Geospatial Augmented Reality for the interactive exploitation of large-scale walkable orthoimage maps in museums

    NASA Astrophysics Data System (ADS)

    Wüest, Robert; Nebiker, Stephan

    2018-05-01

    In this paper we present an app framework for augmenting large-scale walkable maps and orthoimages in museums or public spaces using standard smartphones and tablets. We first introduce a novel approach for using huge orthoimage mosaic floor prints covering several hundred square meters as natural Augmented Reality (AR) markers. We then present a new app architecture and subsequent tests in the Swissarena of the Swiss National Transport Museum in Lucerne demonstrating the capabilities of accurately tracking and augmenting different map topics, including dynamic 3d data such as live air traffic. The resulting prototype was tested with everyday visitors of the museum to get feedback on the usability of the AR app and to identify pitfalls when using AR in the context of a potentially crowded museum. The prototype is to be rolled out to the public after successful testing and optimization of the app. We were able to show that AR apps on standard smartphone devices can dramatically enhance the interactive use of large-scale maps for different purposes such as education or serious gaming in a museum context.

  19. Route visualization using detail lenses.

    PubMed

    Karnick, Pushpak; Cline, David; Jeschke, Stefan; Razdan, Anshuman; Wonka, Peter

    2010-01-01

    We present a method designed to address some limitations of typical route map displays of driving directions. The main goal of our system is to generate a printable version of a route map that shows the overview and detail views of the route within a single, consistent visual frame. Our proposed visualization provides a more intuitive spatial context than a simple list of turns. We present a novel multifocus technique to achieve this goal, where the foci are defined by points of interest (POI) along the route. A detail lens that encapsulates the POI at a finer geospatial scale is created for each focus. The lenses are laid out on the map to avoid occlusion with the route and each other, and to optimally utilize the free space around the route. We define a set of layout metrics to evaluate the quality of a lens layout for a given route map visualization. We compare standard lens layout methods to our proposed method and demonstrate the effectiveness of our method in generating aesthetically pleasing layouts. Finally, we perform a user study to evaluate the effectiveness of our layout choices.

  20. Radiation Source Mapping with Bayesian Inverse Methods

    DOE PAGES

    Hykes, Joshua M.; Azmy, Yousry Y.

    2017-03-22

    In this work, we present a method to map the spectral and spatial distributions of radioactive sources using a limited number of detectors. Locating and identifying radioactive materials is important for border monitoring, for accounting for special nuclear material in processing facilities, and for cleanup operations following a radioactive material spill. Most methods to analyze these types of problems make restrictive assumptions about the distribution of the source. In contrast, the source mapping method presented here allows an arbitrary three-dimensional distribution in space and a gamma peak distribution in energy. To apply the method, the problem is cast as an inverse problem where the system's geometry and material composition are known and fixed, while the radiation source distribution is sought. A probabilistic Bayesian approach is used to solve the resulting inverse problem since the system of equations is ill-posed. The posterior is maximized with a Newton optimization method. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint, discrete ordinates flux solutions, obtained in this work by the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes form the linear mapping from the state space to the response space. The test of the method's success is simultaneously locating a set of 137Cs and 60Co gamma sources in a room. This test problem is solved using experimental measurements that we collected for this purpose. Because of the weak sources available for use in the experiment, some of the expected photopeaks were not distinguishable from the Compton continuum. However, by supplanting 14 flawed measurements (out of a total of 69) with synthetic responses computed by MCNP, the proof-of-principle source mapping was successful. The locations of the sources were predicted within 25 cm for two of the sources and 90 cm for the third, in a room with an ~4 x 4 m floor plan. Finally, the predicted source intensities were within a factor of ten of their true values.
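
    A linear-Gaussian cartoon of the inversion shows the structure: the adjoint fluxes define the linear map A from source voxels to detector responses, a Gaussian prior regularizes the ill-posed system, and the MAP estimate solves the normal equations (for this quadratic posterior a single Newton step is exact). All dimensions, noise levels, and the response matrix below are illustrative, not the paper's transport solutions.

```python
import numpy as np

# Linear-Gaussian stand-in for the Bayesian source-mapping inversion:
# d = A q + noise, Gaussian prior on q, MAP estimate from the normal
# equations. Inverting H would give the posterior covariance, i.e. the
# confidence estimates the probabilistic approach provides.

rng = np.random.default_rng(4)
n_vox, n_det = 200, 20
A = rng.random((n_det, n_vox))              # stand-in response matrix
q_true = np.zeros(n_vox)
q_true[[30, 120]] = 5.0                     # two point sources
d = A @ q_true + 0.05 * rng.standard_normal(n_det)

sigma2, tau2 = 0.05 ** 2, 1.0               # noise and prior variances
H = A.T @ A / sigma2 + np.eye(n_vox) / tau2 # posterior precision (Hessian)
q_map = np.linalg.solve(H, A.T @ d / sigma2)
print("strongest voxels:", np.argsort(q_map)[-2:])
```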

  1. Can we do better than the grid survey: Optimal synoptic surveys in presence of variable uncertainty and decorrelation scales

    NASA Astrophysics Data System (ADS)

    Frolov, Sergey; Garau, Bartolame; Bellingham, James

    2014-08-01

    Regular grid ("lawnmower") survey is a classical strategy for synoptic sampling of the ocean. Is it possible to achieve a more effective use of available resources if one takes into account a priori knowledge about variability in magnitudes of uncertainty and decorrelation scales? In this article, we develop and compare the performance of several path-planning algorithms: optimized "lawnmower," a graph-search algorithm (A*), and a fully nonlinear genetic algorithm. We use the machinery of the best linear unbiased estimator (BLUE) to quantify the ability of a vehicle fleet to synoptically map distribution of phytoplankton off the central California coast. We used satellite and in situ data to specify covariance information required by the BLUE estimator. Computational experiments showed that two types of sampling strategies are possible: a suboptimal space-filling design (produced by the "lawnmower" and the A* algorithms) and an optimal uncertainty-aware design (produced by the genetic algorithm). Unlike the space-filling designs that attempted to cover the entire survey area, the optimal design focused on revisiting areas of high uncertainty. Results of the multivehicle experiments showed that fleet performance predictors, such as cumulative speed or the weight of the fleet, predicted the performance of a homogeneous fleet well; however, these were poor predictors for comparing the performance of different platforms.

  2. Studying the energy variation in the powered Swing-By in the Sun-Mercury system

    NASA Astrophysics Data System (ADS)

    Ferreira, A. F. S.; Prado, A. F. B. A.; Winter, O. C.; Santos, D. P. S.

    2017-10-01

    A maneuver is presented in which a spacecraft passes close to Mercury and uses the gravity of this body combined with an impulse applied at the periapsis, with different magnitudes and directions. The main objective of this maneuver is fuel economy in space missions. Using this maneuver, it is possible to insert the spacecraft into a captured orbit around the Sun or Mercury. Trajectories escaping the Solar System are also obtained and mapped. Maps of the spacecraft energy variation relative to the Sun and of the types of orbits resulting from the maneuver are presented, based on numerical integrations. The results show that applying the impulse out of the direction of motion can optimize the maneuver, owing to the combined effect of the impulse and gravity.

  3. Automatic metro map layout using multicriteria optimization.

    PubMed

    Stott, Jonathan; Rodgers, Peter; Martínez-Ovando, Juan Carlos; Walker, Stephen G

    2011-01-01

    This paper describes an automatic mechanism for drawing metro maps. We apply multicriteria optimization to find effective placement of stations with a good line layout and to label the map unambiguously. A number of metrics are defined, which are used in a weighted sum to find a fitness value for a layout of the map. A hill climbing optimizer is used to reduce the fitness value and find improved map layouts. To avoid local minima, we apply clustering techniques to the map: the hill climber moves both stations and clusters when searching for improved layouts. We show the method applied to a number of metro maps, and describe an empirical study that provides some quantitative evidence that automatically drawn metro maps can help users to find routes more efficiently than either published maps or undistorted maps. Moreover, we have found that, in these cases, study subjects indicate a preference for automatically drawn maps over the alternatives. © 2011 IEEE. Published by the IEEE Computer Society.
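
    A skeletal version of the hill climber, with only two toy metrics (target edge length and octilinearity) in the weighted sum and single-station moves, conveys the mechanics; real systems add labeling metrics and the cluster moves described above. The stations, edges, and weights below are invented.

```python
import random

# Minimal multicriteria hill climbing for a map layout: propose a single-
# station move on the grid, keep it if the weighted fitness improves,
# otherwise revert. Weights and the target edge length are arbitrary.

random.seed(0)
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")]
layout = {s: (random.randint(0, 10), random.randint(0, 10)) for s in "ABCD"}

def fitness(lay):
    f = 0.0
    for u, v in edges:
        dx, dy = lay[u][0] - lay[v][0], lay[u][1] - lay[v][1]
        f += 1.0 * abs((dx * dx + dy * dy) ** 0.5 - 3.0)   # target edge length
        octilinear = dx == 0 or dy == 0 or abs(dx) == abs(dy)
        f += 0.0 if octilinear else 0.5                    # octilinearity term
    return f

for _ in range(5000):
    s = random.choice("ABCD")
    old = layout[s]
    layout[s] = (old[0] + random.randint(-1, 1), old[1] + random.randint(-1, 1))
    if fitness(layout) > fitness({**layout, s: old}):      # worse? revert
        layout[s] = old

print(layout, "fitness:", fitness(layout))
```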

  4. Mapping the Chevallier-Polarski-Linder parametrization onto physical dark energy Models

    NASA Astrophysics Data System (ADS)

    Scherrer, Robert J.

    2015-08-01

    We examine the Chevallier-Polarski-Linder (CPL) parametrization, in the context of quintessence and barotropic dark energy models, to determine the subset of such models to which it can provide a good fit. The CPL parametrization gives the equation of state parameter w for the dark energy as a linear function of the scale factor a, namely w = w0 + wa(1 - a). In the case of quintessence models, we find that over most of the w0, wa parameter space the CPL parametrization maps onto a fairly narrow form of behavior for the potential V(ϕ), while a one-dimensional subset of parameter space, for which wa = κ(1 + w0), with κ constant, corresponds to a wide range of functional forms for V(ϕ). For barotropic models, we show that the functional dependence of the pressure on the density, up to a multiplicative constant, depends only on wi = wa + w0 and not on w0 and wa separately. Our results suggest that the CPL parametrization may not be optimal for testing either type of model.

  5. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images.

    PubMed

    Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao

    2018-03-01

    We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric, with each side consisting of one convolutional layer and several coupling layers. The two input images, connected to the two sides of the network respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the final detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which is different from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.

  6. Ball Aerospace Advances in 35 K Cooling-The SB235E Cryocooler

    NASA Astrophysics Data System (ADS)

    Lock, J. S.; Glaister, D. S.; Gully, W.; Hendershott, P.; Marquardt, E.

    2008-03-01

    This paper describes the design, development, testing and performance of the Ball Aerospace & Technologies Corp. SB235E, a 2-stage long life space cryocooler optimized for 2 cooling loads. The SB235E model is designed to provide simultaneous cooling at 35 K (typically for HgCdTe detectors) and 85 K (typically for optics). The SB235E is a higher capacity model derivative of the SB235. Initial testing of the SB235E has shown performance of 2.13 W at 35 K and 8.14 W at 85 K for 200 W power at 289 K rejection temperature. These data equate to Carnot efficiency of 0.175 or nearly twice that of other published space cryocooler data. Qualification testing has been completed including full performance mapping and vibration export. Performance mapping with the cold-stage temperature varying from 20 K to 80 K and mid-stage temperature varying from 85 K to 175 K are presented. Two engineering models of the SB235E are currently in build.

  7. Infrared image enhancement using H(infinity) bounds for surveillance applications.

    PubMed

    Qidwai, Uvais

    2008-08-01

    In this paper, two algorithms have been presented to enhance the infrared (IR) images. Using the autoregressive moving average model structure and H(infinity) optimal bounds, the image pixels are mapped from the IR pixel space into normal optical image space, thus enhancing the IR image for improved visual quality. Although H(infinity)-based system identification algorithms are very common now, they are not quite suitable for real-time applications owing to their complexity. However, many variants of such algorithms are possible that can overcome this constraint. Two such algorithms have been developed and implemented in this paper. Theoretical and algorithmic results show remarkable enhancement in the acquired images. This will help in enhancing the visual quality of IR images for surveillance applications.

  8. Towards mapping of rock walls using a UAV-mounted 2D laser scanner in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Turner, Glen

    In geotechnical engineering, the stability of rock excavations and walls is estimated by using tools that include a map of the orientations of exposed rock faces. However, measuring these orientations by using conventional methods can be time consuming, sometimes dangerous, and is limited to regions of the exposed rock that are reachable by a human. This thesis introduces a 2D, simulated, quadcopter-based rock wall mapping algorithm for GPS denied environments such as underground mines or near high walls on surface. The proposed algorithm employs techniques from the field of robotics known as simultaneous localization and mapping (SLAM) and is a step towards 3D rock wall mapping. Not only are quadcopters agile, but they can hover. This is very useful for confined spaces such as underground or near rock walls. The quadcopter requires sensors to enable self localization and mapping in dark, confined and GPS denied environments. However, these sensors are limited by the quadcopter payload and power restrictions. Because of these restrictions, a light weight 2D laser scanner is proposed. As a first step towards a 3D mapping algorithm, this thesis proposes a simplified scenario in which a simulated 1D laser range finder and 2D IMU are mounted on a quadcopter that is moving on a plane. Because the 1D laser does not provide enough information to map the 2D world from a single measurement, many measurements are combined over the trajectory of the quadcopter. Least Squares Optimization (LSO) is used to optimize the estimated trajectory and rock face for all data collected over the length of a flight. Simulation results show that the mapping algorithm developed is a good first step. It shows that by combining measurements over a trajectory, the scanned rock face can be estimated using a lower-dimensional range sensor. A swathing manoeuvre is introduced as a way to promote loop closures within a short time period, thus reducing accumulated error. Some suggestions on how to improve the algorithm are also provided.

  9. Optimization of abdominal fat quantification on CT imaging through use of standardized anatomic space: A novel approach

    PubMed Central

    Tong, Yubing; Udupa, Jayaram K.; Torigian, Drew A.

    2014-01-01

    Purpose: The quantification of body fat plays an important role in the study of numerous diseases. It is common current practice to use the fat area at a single abdominal computed tomography (CT) slice as a marker of the body fat content in studying various disease processes. This paper sets out to answer three questions related to this issue which have not been addressed in the literature. At what single anatomic slice location do the areas of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) estimated from the slice correlate maximally with the corresponding fat volume measures? How does one ensure that the slices used for correlation calculation from different subjects are at the same anatomic location? Are there combinations of multiple slices (not necessarily contiguous) whose area sum correlates better with volume than does single slice area with volume? Methods: The authors propose a novel strategy for mapping slice locations to a standardized anatomic space so that same anatomic slice locations are identified in different subjects. The authors then study the volume-to-area correlations and determine where they become maximal. To address the third issue, the authors carry out similar correlation studies by utilizing two and three slices for calculating area sum. Results: Based on 50 abdominal CT data sets, the proposed mapping achieves significantly improved consistency of anatomic localization compared to current practice. Maximum correlations are achieved at different anatomic locations for SAT and VAT which are both different from the L4-L5 junction commonly utilized currently for single slice area estimation as a marker. Conclusions: The maximum area-to-volume correlation achieved is quite high, suggesting that it may be reasonable to estimate body fat by measuring the area of fat from a single anatomic slice at the site of maximum correlation and use this as a marker. The site of maximum correlation is not at L4-L5 as commonly assumed, but is more superiorly located at T12-L1 for SAT and at L3-L4 for VAT. Furthermore, the optimal anatomic locations for SAT and VAT estimation are not the same, contrary to common assumption. The proposed standardized space mapping achieves high consistency of anatomic localization by accurately managing nonlinearities in the relationships among landmarks. Multiple slices achieve greater improvement in correlation for VAT than for SAT. The optimal locations in the case of multiple slices are not contiguous. PMID:24877839
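
    The slice-location mapping can be pictured as piecewise-linear interpolation between landmark slices: each subject's landmark indices are mapped onto fixed standardized positions, so a given slice index acquires a subject-independent anatomic coordinate. The landmark names and indices below are invented for illustration; the paper's standardized anatomic space handles the nonlinearities more carefully.

```python
import numpy as np

# Piecewise-linear mapping of subject slice indices onto a standardized
# anatomic scale defined by shared landmarks, so "the same" anatomic slice
# can be compared across subjects. Values here are illustrative.

standard = {"T12-L1": 0.0, "L3-L4": 0.6, "L4-L5": 0.8, "S1": 1.0}

def to_standard(slice_idx, subject_landmarks):
    names = list(standard)
    xs = [subject_landmarks[n] for n in names]     # subject slice indices
    ys = [standard[n] for n in names]              # standardized positions
    return np.interp(slice_idx, xs, ys)

subject = {"T12-L1": 12, "L3-L4": 33, "L4-L5": 40, "S1": 48}
print(to_standard(36, subject))   # standardized position of subject slice 36
```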

  10. Optimization of abdominal fat quantification on CT imaging through use of standardized anatomic space: A novel approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Yubing; Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Torigian, Drew A.

    Purpose: The quantification of body fat plays an important role in the study of numerous diseases. It is common current practice to use the fat area at a single abdominal computed tomography (CT) slice as a marker of the body fat content in studying various disease processes. This paper sets out to answer three questions related to this issue which have not been addressed in the literature. At what single anatomic slice location do the areas of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) estimated from the slice correlate maximally with the corresponding fat volume measures? How does one ensure that the slices used for correlation calculation from different subjects are at the same anatomic location? Are there combinations of multiple slices (not necessarily contiguous) whose area sum correlates better with volume than does single slice area with volume? Methods: The authors propose a novel strategy for mapping slice locations to a standardized anatomic space so that same anatomic slice locations are identified in different subjects. The authors then study the volume-to-area correlations and determine where they become maximal. To address the third issue, the authors carry out similar correlation studies by utilizing two and three slices for calculating area sum. Results: Based on 50 abdominal CT data sets, the proposed mapping achieves significantly improved consistency of anatomic localization compared to current practice. Maximum correlations are achieved at different anatomic locations for SAT and VAT which are both different from the L4-L5 junction commonly utilized currently for single slice area estimation as a marker. Conclusions: The maximum area-to-volume correlation achieved is quite high, suggesting that it may be reasonable to estimate body fat by measuring the area of fat from a single anatomic slice at the site of maximum correlation and use this as a marker. The site of maximum correlation is not at L4-L5 as commonly assumed, but is more superiorly located at T12-L1 for SAT and at L3-L4 for VAT. Furthermore, the optimal anatomic locations for SAT and VAT estimation are not the same, contrary to common assumption. The proposed standardized space mapping achieves high consistency of anatomic localization by accurately managing nonlinearities in the relationships among landmarks. Multiple slices achieve greater improvement in correlation for VAT than for SAT. The optimal locations in the case of multiple slices are not contiguous.

  11. A real time QRS detection using delay-coordinate mapping for the microcontroller implementation.

    PubMed

    Lee, Jeong-Whan; Kim, Kyeong-Seop; Lee, Bongsoo; Lee, Byungchae; Lee, Myoung-Ho

    2002-01-01

    In this article, we propose a new algorithm using the characteristics of phase portraits reconstructed by delay-coordinate mapping, utilizing lag rotundity, for real-time detection of QRS complexes in ECG signals. In reconstructing the phase portrait, the mapping parameters, time delay and mapping dimension, play important roles in shaping the portraits drawn in the new dimensional space. Experimentally, the optimal mapping time delay for detection of QRS complexes turned out to be 20 ms. To explore the meaning of this time delay and the proper mapping dimension, we applied fill-factor, mutual-information, and autocorrelation-function algorithms, which are generally used to analyze the chaotic characteristics of sampled signals. From these results, we found that the performance of our proposed algorithm relies mainly on geometrical properties such as the area of the reconstructed phase portrait. As a practical application, we used our algorithm to design a small cardiac event recorder. This system records patients' ECG and R-R intervals for 1 h to investigate the HRV characteristics of patients with symptoms of vasovagal syncope. For evaluation, we implemented our algorithm in C and applied it to the MIT/BIH arrhythmia database of 48 subjects. Our proposed algorithm achieved a 99.58% detection rate for QRS complexes.
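
    A compact illustration of the delay-coordinate idea: embed the signal with a 20 ms lag (5 samples at 250 Hz) and score fixed-length windows by the area their 2-D phase portrait encloses, since QRS complexes trace large loops. The signal and the shoelace-area proxy below are synthetic stand-ins, not the paper's detector.

```python
import numpy as np

# Delay-coordinate embedding of a pseudo-ECG and a simple loop-area score:
# windows whose reconstructed portrait encloses a large area are flagged
# as QRS candidates. All signal parameters are synthetic.

fs = 250                                  # sampling rate (Hz)
lag = int(0.020 * fs)                     # 20 ms delay -> 5 samples
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63   # spiky pseudo-QRS train

x, y = ecg[:-lag], ecg[lag:]              # 2-D reconstructed phase portrait

def loop_area(xs, ys):
    # shoelace formula: area enclosed by the portrait segment
    return 0.5 * abs(np.sum(xs * np.roll(ys, -1) - ys * np.roll(xs, -1)))

win = fs // 5                             # 200 ms analysis windows
areas = np.array([loop_area(x[i:i + win], y[i:i + win])
                  for i in range(0, len(x) - win, win)])
flagged = np.where(areas > 0.5 * areas.max())[0]
print("windows flagged as QRS:", flagged)
```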

  12. Relationship between environmental management with quality of kampong space room (Case study: RW 3 of Sukun Sub District, Malang City)

    NASA Astrophysics Data System (ADS)

    Wardhani, D. K.; Azmi, D. S.; Purnamasari, W. D.

    2017-06-01

    RW 3 Sukun, Malang, is a kampong that won a kampong environment competition and has managed to maintain its character. The community of RW 3 Sukun undertakes various environmental management activities by optimizing the use of kampong space. Although RW 3 Sukun has conducted such activities, several locations in the kampong remain poorly maintained. The purpose of this research was to determine the relation between environmental management and the quality of kampong space in RW 3 Sukun. The research used both qualitative and quantitative approaches. The quantitative work applied descriptive statistical analysis to assess the quality of kampong space through weighting, scoring, and map overlays, and analyzed the relation between environmental management and the quality of kampong space using typology analysis and Pearson correlation analysis. The qualitative work addressed the environmental management activities themselves and their relation to the quality of kampong space. The results indicate that environmental management in RW 3 Sukun is related to the quality of kampong space.

  13. Integrated Modeling Activities for the James Webb Space Telescope: Structural-Thermal-Optical Analysis

    NASA Technical Reports Server (NTRS)

    Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.

    2004-01-01

    The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical analysis process, often referred to as STOP, is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.

  14. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system-level optimization, which has a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best-guess initial design, the method improves that design in iterative cycles, each comprising two steps. In step one, the system-level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system-level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual-level supersonic business jet design and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. The modularity of the method is intended to fit the human organization and to map well onto concurrent-processing computing technology.
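
    The two-step cycle is easy to illustrate on a toy problem. The sketch below is a minimal Python stand-in, not the MATLAB/iSIGHT prototype: two subsystems are coupled through one system variable z, and a full BLISS step would propagate optimum sensitivity derivatives rather than re-evaluating the subsystem objectives.

```python
import numpy as np
from scipy.optimize import minimize

# Toy BLISS-style iteration (a sketch, not the NASA implementation):
# system variable z couples two subsystems, each with a local variable x_i.
def sub1(x, z):  # local objective of subsystem 1
    return (x - z) ** 2 + 0.1 * x ** 2

def sub2(x, z):  # local objective of subsystem 2
    return (x + z - 3.0) ** 2 + 0.1 * x ** 2

z = 0.0                      # best-guess initial system design
for cycle in range(10):
    # Step 1: concurrent, autonomous subsystem optimizations (z frozen).
    x1 = minimize(lambda x: sub1(x[0], z), [0.0]).x[0]
    x2 = minimize(lambda x: sub2(x[0], z), [0.0]).x[0]
    # Step 2: system-level improvement (locals frozen); full BLISS would
    # use optimum sensitivity derivatives here instead of re-evaluation.
    z = minimize(lambda v: sub1(x1, v[0]) + sub2(x2, v[0]), [z]).x[0]
    print(cycle, round(z, 4), round(sub1(x1, z) + sub2(x2, z), 6))
```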

  15. Using Satellite Images for Wireless Network Planning in Baku City

    NASA Astrophysics Data System (ADS)

    Gojamanov, M.; Ismayilov, J.

    2013-04-01

    It is a well-known fact that information-telecommunication and space research technologies are among the fields benefiting most from scientific and technical progress. In many cases these areas, supporting each other, have improved the conditions for their further development; for instance, the intensive development of mobile communication has accelerated progress in space research technologies and vice versa. Today it is impossible to solve one of the most important tasks of mobile communication, radio frequency planning, without 2D and 3D digital maps, and compiling such maps is much more efficient by means of space images, whose quality has improved considerably in both spectral and spatial resolution: it is now possible to use 8-band images with a spatial resolution of 50 cm. At present, in relation to the 3G function of mobile communications, one of the main needs facing mobile operator companies is high-precision 3D digital maps. It should be noted that the number of mobile phone users in the Republic of Azerbaijan exceeds that of the other Commonwealth of Independent States countries. Using aerial images for 3D mapping would of course be optimal; however, owing to a number of technical and administrative problems, aerial photography cannot be used, and the experience of many countries shows that it is more effective to use higher-resolution space images for these tasks. Since mobile communication within the city of Baku includes the 3G function, stereo images with a spatial resolution of 50 cm were ordered for the 150 sq. km territory occupying the central part of the city in order to compile 3D digital maps. The images collected from the WorldView-2 satellite are 4-band bundle (Pan+MS1) stereo images; such imagery enables automatic classification of the required clutter classes. Meanwhile, 12 GPS points were established in the territory and appropriate observations were carried out at these points for the geodesic referencing of the space images. Moreover, 37 permanently operating GPS stations have now been constructed in the territory of Azerbaijan, which significantly facilitates the geodesic referencing of space images for such projects. The processing of the collected space images was accomplished by means of the Erdas LPS 10 program. In the first stage, the main component of the 3D maps, the Digital Elevation Model, was created. In this model the following clutter classes are represented: Open; Open areas in urban; Airport; Sea; Inland water; Forest; Parks in urban; Semi Open Area; Open Wet Area; Urban/Urban Mean; Dense urban; Villages; Industrial/Commercial; Residential/Suburban; Dense residential/Suburban; Block of buildings; Dense Urban High; Buildings; Urban Mixed; Mixed dense urban.

  16. Space charge effects and aberrations on electron pulse compression in a spherical electrostatic capacitor.

    PubMed

    Yu, Lei; Li, Haibo; Wan, Weishi; Wei, Zheng; Grzelakowski, Krzysztof P; Tromp, Rudolf M; Tang, Wen-Xin

    2017-12-01

    The effects of space charge, aberrations and relativity on temporal compression are investigated for a compact spherical electrostatic capacitor (α-SDA). By employing three-dimensional (3D) field simulation and a 3D space charge model based on the numerical General Particle Tracer and SIMION, we map the compression efficiency for a wide range of initial beam sizes and single-pulse electron numbers and determine the optimum conditions of electron pulses for the most effective compression. The results demonstrate that both space charge effects and aberrations prevent the compression of electron pulses into the sub-ps region if the electron number and the beam size are not properly optimized. Our results suggest that α-SDA is an effective compression approach for electron pulses under the optimum conditions. It may serve as a potential key component in designing future time-resolved electron sources for electron diffraction and spectroscopy experiments.

  17. WE-AB-209-10: Optimizing the Delivery of Sequential Fluence Maps for Efficient VMAT Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, D; Balvert, M

    2016-06-15

    Purpose: To develop an optimization model and solution approach for computing MLC leaf trajectories and dose rates that closely match a set of optimized fluence maps to be delivered sequentially around a patient in a VMAT treatment. Methods: We formulate the fluence map matching problem as a nonlinear optimization problem in which time is discretized but dose rates and leaf positions are continuous variables. For a given allotted time, which is allocated across the fluence maps based on the complexity of each map, the optimization problem searches for the best leaf trajectories and dose rates such that the original fluence maps are closely recreated. Constraints include maximum leaf speed, maximum dose rate, and leaf collision avoidance, as well as the constraint that the ending leaf positions for one map are the starting leaf positions for the next. The resulting model is non-convex but smooth, and we therefore solve it by local searches from a variety of starting positions. We improve solution time by a custom decomposition approach that decouples the rows of the fluence maps and solves each leaf pair individually; this decomposition also makes the problem easily parallelized. Results: We demonstrate the method on a prostate case and a head-and-neck case and show that one can recreate fluence maps to a high degree of fidelity in modest total delivery time (minutes). Conclusion: We present a VMAT sequencing method that reproduces optimal fluence maps by searching over a vast number of possible leaf trajectories. By varying the total allotted time, this approach is the first of its kind to allow users to produce VMAT solutions that span the range from wide-field coarse VMAT deliveries to narrow-field, high-MU, sliding-window-like approaches.
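
    For a feel of how leaf trajectories encode a fluence profile, the sketch below implements the classic unidirectional sliding-window recurrence (in the style of Spirou and Chui) for a single leaf pair. It is not the paper's nonlinear optimizer, and the dose rate, leaf speed, and toy profile are illustrative assumptions.

```python
import numpy as np

def sweep_leaf_times(fluence, dx, v_max, dose_rate):
    """Classic unidirectional sliding-window recurrence (not the paper's
    optimizer): returns the times at which the leading (B) and trailing (A)
    leaves cross each position so that dose_rate * (tA - tB) = fluence."""
    n = len(fluence)
    tB = np.zeros(n)
    tA = np.zeros(n)
    tA[0] = fluence[0] / dose_rate
    for i in range(n - 1):
        if fluence[i + 1] >= fluence[i]:   # rising profile: leading leaf at full speed
            tB[i + 1] = tB[i] + dx / v_max
            tA[i + 1] = tB[i + 1] + fluence[i + 1] / dose_rate
        else:                              # falling profile: trailing leaf at full speed
            tA[i + 1] = tA[i] + dx / v_max
            tB[i + 1] = tA[i + 1] - fluence[i + 1] / dose_rate
    return tB, tA

profile = np.array([0.2, 0.8, 1.0, 0.5, 0.7, 0.3])   # MU per bixel (toy values)
tB, tA = sweep_leaf_times(profile, dx=5.0, v_max=30.0, dose_rate=4.0)
print(np.allclose((tA - tB) * 4.0, profile))          # True: profile is reproduced
```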

  18. Prediction of RNA secondary structures: from theory to models and real molecules

    NASA Astrophysics Data System (ADS)

    Schuster, Peter

    2006-05-01

    RNA secondary structures are derived from RNA sequences, which are strings built from the natural four letter nucleotide alphabet, {AUGC}. These coarse-grained structures, in turn, are tantamount to constrained strings over a three letter alphabet. Hence, the secondary structures are discrete objects and the number of sequences always exceeds the number of structures. The sequences built from two letter alphabets form perfect structures when the nucleotides can form a base pair, as is the case with {GC} or {AU}, but the relation between the sequences and structures differs strongly from the four letter alphabet. A comprehensive theory of RNA structure is presented, which is based on the concepts of sequence space and shape space, the latter being a space of structures. It sets the stage for modelling processes in ensembles of RNA molecules, like evolutionary optimization or kinetic folding, as dynamical phenomena guided by mappings between the two spaces. The number of minimum free energy (mfe) structures is always smaller than the number of sequences, even for two letter alphabets. Folding of RNA molecules into mfe structures constitutes a non-invertible mapping from sequence space onto shape space. The preimage of a structure in sequence space is defined as its neutral network. Similarly, the set of suboptimal structures is the preimage of a sequence in shape space; this set represents the conformation space of a given sequence. The evolutionary optimization of structures in populations is a process taking place in sequence space, whereas kinetic folding occurs in molecular ensembles that optimize free energy in conformation space. Efficient folding algorithms based on dynamic programming are available for the prediction of secondary structures of given sequences. The inverse problem, the computation of sequences for predefined structures, is an important tool for the design of RNA molecules with tailored properties. Simultaneous folding or cofolding of two or more RNA molecules can be modelled readily at the secondary structure level and allows prediction of the most stable (mfe) conformations of complexes together with suboptimal states. Cofolding algorithms are important tools for efficient and highly specific primer design in the polymerase chain reaction (PCR) and help to explain the mechanisms of small interfering RNA (siRNA) molecules in gene regulation. The evolutionary optimization of RNA structures is illustrated by the search for a target structure and mimics aptamer selection in evolutionary biotechnology. It occurs typically in steps consisting of short adaptive phases interrupted by long epochs of little or no obvious progress in optimization. During these quasi-stationary epochs the populations are essentially confined to neutral networks, where they search for sequences that allow a continuation of the adaptive process. Modelling RNA evolution as a simultaneous process in sequence and shape space provides answers to questions of the optimal population size and mutation rates. Kinetic folding is a stochastic process in conformation space. Exact solutions are derived by direct simulation in the form of trajectory sampling or by solving the master equation. The exact solutions can be approximated straightforwardly by Arrhenius kinetics on barrier trees, which represent simplified versions of conformational energy landscapes. The existence of at least one sequence forming any arbitrarily chosen pair of structures is guaranteed by the intersection theorem.
Folding kinetics is the key to understanding and designing multistable RNA molecules or RNA switches. These RNAs form two or more long lived conformations, and conformational changes occur either spontaneously or are induced through binding of small molecules or other biopolymers. RNA switches are found in nature where they act as elements in genetic and metabolic regulation. The reliability of RNA secondary structure prediction is limited by the accuracy with which the empirical parameters can be determined and by principal deficiencies, for example by the lack of energy contributions resulting from tertiary interactions. In addition, native structures may be determined by folding kinetics rather than by thermodynamics. We address the first problem by considering base pair probabilities or base pairing entropies, which are derived from the partition function of conformations. A high base pair probability corresponding to a low pairing entropy is taken as an indicator of a high reliability of prediction. Pseudoknots are discussed as an example of a tertiary interaction that is highly important for RNA function. Moreover, pseudoknot formation is readily incorporated into structure prediction algorithms. Some examples of experimental data on RNA secondary structures that are readily explained using the landscape concept are presented. They deal with (i) properties of RNA molecules with random sequences, (ii) RNA molecules from restricted alphabets, (iii) existence of neutral networks, (iv) shape space covering, (v) riboswitches and (vi) evolution of non-coding RNAs as an example of evolution restricted to neutral networks.
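
    Dynamic-programming folding can be illustrated with the classic Nussinov base-pair-maximization recursion, a toy stand-in for true mfe folding with nearest-neighbour energies (as implemented, e.g., in the Vienna package); the sequence and minimum loop size below are illustrative.

```python
def nussinov(seq, min_loop=3):
    """Toy dynamic-programming folding: maximize Watson-Crick/GU pairs
    (a stand-in for true mfe folding with nearest-neighbour energies)."""
    pairs = {("A","U"),("U","A"),("G","C"),("C","G"),("G","U"),("U","G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case 1: j left unpaired
            for k in range(i, j - min_loop):     # case 2: j pairs with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))  # small toy hairpin: 3 base pairs
```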

  19. A Method of Surrogate Model Construction which Leverages Lower-Fidelity Information using Space Mapping Techniques

    DTIC Science & Technology

    2014-03-27

    fidelity. This pairing is accomplished through the use of a space mapping technique, which is a process where the design space of a lower fidelity model...is aligned with a higher fidelity model. The intent of applying space mapping techniques to the field of surrogate construction is to leverage the

  20. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
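
    A minimal sketch of the derivative-free sandwich idea: for a convex frontier, the chord between two known Pareto points lies above the frontier, so the chord's midpoint sag locates where solving the next plan is most informative. The closed-form frontier below is a hypothetical stand-in for solving a new Pareto-optimal plan; in the paper, interpolation of neighbouring fluence maps would also serve as the warm start for that solve.

```python
import numpy as np

# Derivative-free sandwich-style refinement sketch (toy, not the paper's
# exact algorithm): a convex Pareto frontier f2 = phi(f1) is sampled at a
# few points; between neighbours the chord lies above a convex frontier,
# so the chord-midpoint error measures the local approximation gap.
phi = lambda f1: 1.0 / f1                 # hypothetical convex trade-off curve

f1 = list(np.linspace(0.5, 4.0, 4))       # initial Pareto sample (4 "plans")
for _ in range(6):
    # find the segment whose chord deviates most from the frontier
    gaps = []
    for a, b in zip(f1[:-1], f1[1:]):
        mid = 0.5 * (a + b)
        chord_mid = 0.5 * (phi(a) + phi(b))   # linear interpolation of plans
        gaps.append((chord_mid - phi(mid), mid))
    gap, new_point = max(gaps)
    f1 = sorted(f1 + [new_point])             # "solve" a new plan there
    print(f"worst chord gap {gap:.4f}")       # gap shrinks as the PEF is refined
```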

  1. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis; the classes of the linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  2. Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.

    PubMed

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2008-01-01

    In this paper we present an advanced method of obstacle avoidance for a laser-based intelligent wheelchair using optimized Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisition is performed separately to collect the patterns required for each sub-task. A Bayesian framework is used to determine the optimal neural network structure in each case, and the networks are then trained under the supervision of the Bayesian rule. Experimental results showed that, compared to the VFH algorithm, our neural networks navigated a smoother path following a near-optimum trajectory.

  3. Methodology and method and apparatus for signaling with capacity optimized constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2011-01-01

    A communication system having a transmitter includes a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using the symbols generated by the mapper. The receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d_min.
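
    The figure of merit behind such geometric shaping can be estimated numerically. The sketch below is a Monte Carlo estimate of the AWGN mutual information of an equiprobable constellation (ordinary square 16-QAM is used as a baseline); it illustrates the metric only, not the patent's optimization procedure.

```python
import numpy as np

def awgn_mutual_information(points, snr_db, n=200_000, rng=np.random.default_rng(1)):
    """Monte Carlo estimate of I(X;Y) in bits/symbol for equiprobable 2-D
    symbols over AWGN -- the quantity a geometrically shaped
    (capacity-optimized) constellation improves at a given SNR."""
    pts = np.asarray(points, dtype=complex)
    pts /= np.sqrt(np.mean(np.abs(pts) ** 2))            # unit average power
    m = len(pts)
    sigma2 = 10 ** (-snr_db / 10)                        # noise variance (Es = 1)
    x = pts[rng.integers(0, m, n)]
    y = x + np.sqrt(sigma2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    # unnormalized log-likelihoods of y under every candidate symbol
    log_pyx = -np.abs(y[:, None] - pts[None, :]) ** 2 / sigma2
    log_py = np.logaddexp.reduce(log_pyx, axis=1) - np.log(m)
    log_pyx_true = -np.abs(y - x) ** 2 / sigma2          # common factors cancel
    return np.mean(log_pyx_true - log_py) / np.log(2)

square_16qam = [a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)]
print(awgn_mutual_information(square_16qam, snr_db=10))  # roughly 3.1 bits/symbol
```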

  4. A space radiation transport method development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2004-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we discuss progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation, allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice and enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms, which severely limits application of Monte Carlo methods to such engineering models. A potential means of improving Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministically optimized design.

  5. A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation

    NASA Astrophysics Data System (ADS)

    Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava

    2015-12-01

    In this paper, we address the issue of the over-segmented regions produced by watershed segmentation by merging the regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion for merging the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a simulated digital brain phantom dataset to segment the white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) soft-tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, merging an average of 95.242% of the regions, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.

  6. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing, defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take the exponential distribution as an example and provide a simple method for finding the conditional optimal spacing.

  7. Improved sliced velocity map imaging apparatus optimized for H photofragments.

    PubMed

    Ryazanov, Mikhail; Reisler, Hanna

    2013-04-14

    Time-sliced velocity map imaging (SVMI), a high-resolution method for measuring kinetic energy distributions of products in scattering and photodissociation reactions, is challenging to implement for atomic hydrogen products. We describe an ion optics design aimed at achieving SVMI of H fragments in a broad range of kinetic energies (KE), from a fraction of an electronvolt to a few electronvolts. In order to enable consistently thin slicing for any imaged KE range, an additional electrostatic lens is introduced in the drift region for radial magnification control without affecting temporal stretching of the ion cloud. Time slices of ∼5 ns out of a cloud stretched to ⩾50 ns are used. An accelerator region with variable dimensions (using multiple electrodes) is employed for better optimization of radial and temporal space focusing characteristics at each magnification level. The implemented system was successfully tested by recording images of H fragments from the photodissociation of HBr, H2S, and the CH2OH radical, with kinetic energies ranging from <0.4 eV to >3 eV. It demonstrated KE resolution ≲1%-2%, similar to that obtained in traditional velocity map imaging followed by reconstruction, and to KE resolution achieved previously in SVMI of heavier products. We expect it to perform just as well up to at least 6 eV of kinetic energy. The tests showed that numerical simulations of the electric fields and ion trajectories in the system, used for optimization of the design and operating parameters, provide an accurate and reliable description of all aspects of system performance. This offers the advantage of selecting the best operating conditions in each measurement without the need for additional calibration experiments.

  8. Dynamic trajectory-based couch motion for improvement of radiation therapy trajectories in cranial SRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, R. Lee; Thomas, Christopher G., E-mail: Chris.Thomas@cdha.nshealth.ca; Department of Medical Physics, Nova Scotia Cancer Centre, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia B3H 1V7

    2015-05-15

    Purpose: To investigate potential improvement in external beam stereotactic radiation therapy plan quality for cranial cases using an optimized dynamic gantry and patient support couch motion trajectory, which could minimize exposure to sensitive healthy tissue. Methods: Anonymized patient anatomy and treatment plans of cranial cancer patients were used to quantify the geometric overlap between planning target volumes and organs-at-risk (OARs) based on their two-dimensional projection from source to a plane at isocenter as a function of gantry and couch angle. Published dose constraints were then used as weighting factors for the OARs to generate a map of couch-gantry coordinate space indicating the degree of overlap at each point in that space. A couch-gantry collision space was generated by direct measurement on a linear accelerator and couch using an anthropomorphic solid-water phantom. A dynamic, fully customizable algorithm was written to generate a navigable ideal trajectory for the patient-specific couch-gantry space. The algorithm can be used to balance the pursuit of absolute minimum overlap against the clinical practicality of large-scale couch motion and delivery time. Optimized cranial cancer treatment trajectories were compared to conventional treatment trajectories. Results: Comparison of optimized treatment trajectories with conventional treatment trajectories indicated an average decrease in mean dose to the OARs of 19% and an average decrease in maximum dose to the OARs of 12%. Degradation was seen for the homogeneity index (from 6.14% ± 0.67% to 5.48% ± 0.76%) and the conformation number (from 0.82 ± 0.02 to 0.79 ± 0.02), but neither change was statistically significant. Removal of OAR constraints from the volumetric modulated arc therapy optimization reveals that the reduction in dose to OARs is almost exclusively due to the optimized trajectory and not the OAR constraints. Conclusions: The authors' study indicated that simultaneous couch and gantry motion during radiation therapy to minimize the geometrical overlap in the beams-eye-view of target volumes and organs-at-risk can yield an appreciable dose reduction to organs-at-risk.
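
    One way to turn such an overlap map into a navigable trajectory is dynamic programming over gantry stages with a couch-speed limit. The sketch below uses a random, hypothetical overlap map and a toy collision zone; the authors' algorithm additionally balances overlap minimization against large-scale couch motion and delivery time.

```python
import numpy as np

# Sketch of choosing a navigable couch-gantry trajectory (assumptions: a
# precomputed overlap/collision map and a couch-speed limit per gantry step).
rng = np.random.default_rng(2)
n_gantry, n_couch = 180, 37                    # 2-deg gantry steps, 5-deg couch bins
overlap = rng.random((n_gantry, n_couch))      # hypothetical BEV overlap score
overlap[:, 10:13] = np.inf                     # e.g., collision zone is forbidden
max_step = 2                                   # couch bins reachable per gantry step

cost = overlap[0].copy()
back = np.zeros((n_gantry, n_couch), dtype=int)
for g in range(1, n_gantry):
    new_cost = np.empty(n_couch)
    for c in range(n_couch):
        lo, hi = max(0, c - max_step), min(n_couch, c + max_step + 1)
        prev = lo + int(np.argmin(cost[lo:hi]))   # best reachable previous couch bin
        back[g, c] = prev
        new_cost[c] = cost[prev] + overlap[g, c]
    cost = new_cost

# Backtrack the minimum-overlap trajectory.
path = [int(np.argmin(cost))]
for g in range(n_gantry - 1, 0, -1):
    path.append(int(back[g, path[-1]]))
path.reverse()
print(path[:10])                               # couch bin per gantry angle
```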

  9. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    PubMed

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  10. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique consists of three modules: the decor, the market, and the search engine. The decor is the physical space occupied by the Virtual Boutique; it can reproduce any existing boutique. For this purpose photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space, consisting of meshes and texture maps, is calculated from them with software developed at NRC. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects, and scanned objects. The objects are scanned with a laser scanner developed at NRC that allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, comprises all the merchandise and the manipulators used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.

  11. Coherence area profiling in multi-spatial-mode squeezed states

    DOE PAGES

    Lawrie, Benjamin J.; Pooser, Raphael C.; Otterstrom, Nils T.

    2015-09-12

    The presence of multiple bipartite entangled modes in squeezed states generated by four-wave mixing enables ultra-trace sensing, imaging, and metrology applications that are impossible to achieve with single-spatial-mode squeezed states. For Gaussian seed beams, the spatial distribution of these bipartite entangled modes, or coherence areas, across each beam is largely dependent on the spatial modes present in the pump beam, but it has proven difficult to map the distribution of these coherence areas in frequency and space. We demonstrate an accessible method to map the distribution of the coherence areas within these twin beams. In addition, we also show that the pump shape can impart different noise properties to each coherence area, and that it is possible to select and detect coherence areas with optimal squeezing with this approach.

  12. TH-CD-209-05: Impact of Spot Size and Spacing On the Quality of Robustly-Optimized Intensity-Modulated Proton Therapy Plans for Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Ding, X; Hu, Y

    Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of the interplay effect, in robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with the internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). Root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery and thereby incorporate the interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student's t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for the ITV and most OARs. With the interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity was observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and a similar impact of the interplay effect compared to larger spots. A small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant than spot size alone in determining plan robustness and the impact of the interplay effect. This research was supported by the National Cancer Institute Career Developmental Award K25CA168984, by the Fraternal Order of Eagles Cancer Research Fund Career Development Award, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by a Mayo Arizona State University Seed Grant, and by The Kemper Marley Foundation.

  13. Empty tracks optimization based on Z-Map model

    NASA Astrophysics Data System (ADS)

    Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao

    2017-12-01

    For parts with many features, there are many empty (rapid-traverse) tracks during machining, and if these tracks are not optimized the machining efficiency is seriously affected. In this paper, the characteristics of the empty tracks are studied in detail. Combining the existing optimization algorithms, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing sections, and Z-Map model simulation is used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is thereby transformed into a TSP with sequential constraints, which is then solved with a genetic algorithm. This optimization method can handle not only simple structural parts but also complex structural parts, effectively planning the empty tracks and greatly improving processing efficiency.
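
    The structure of the problem is easy to see in a greedy baseline. The sketch below orders unit segments by nearest start point while honouring precedence constraints; the segment coordinates and prerequisite lists are invented for illustration, and the paper's genetic algorithm would replace the greedy choice.

```python
import math

# Greedy baseline for ordering unit machining segments under precedence
# constraints (the paper solves this TSP variant with a genetic algorithm;
# this nearest-neighbour sketch only illustrates the problem structure).
# Each segment: (start_xy, end_xy); prereq[i] lists segments that must
# precede segment i (e.g., order constraints derived from the Z-Map model).
segments = [((0, 0), (10, 0)), ((12, 5), (20, 5)), ((10, 1), (10, 8)), ((21, 6), (30, 6))]
prereq = {0: [], 1: [0], 2: [0], 3: [1]}

done, order = set(), []
pos = (0.0, 0.0)                       # tool-change / home position
while len(order) < len(segments):
    ready = [i for i in prereq if i not in done and all(p in done for p in prereq[i])]
    nxt = min(ready, key=lambda i: math.dist(pos, segments[i][0]))  # shortest empty track
    order.append(nxt)
    done.add(nxt)
    pos = segments[nxt][1]             # tool ends at the segment's end point
print(order)
```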

  14. Mapping genes underlying ethnic differences in disease risk by linkage disequilibrium in recently admixed populations.

    PubMed Central

    McKeigue, P M

    1997-01-01

    Where recent admixture has occurred between two populations that have different disease rates for genetic reasons, family-based association studies can be used to map the genes underlying these differences, if the ancestry of the alleles at each locus examined can be assigned to one of the two founding populations. This article explores the statistical power and design requirements of this approach. Markers suitable for assigning the ancestry of genomic regions could be defined by grouping alleles at closely spaced microsatellite loci into haplotypes, or generated by representational difference analysis. For a given relative risk between populations, the sample size required to detect a disease locus that accounts for this relative risk by linkage-disequilibrium mapping in an admixed population is not critically dependent on assumptions about genotype penetrances or allele frequencies. Using the transmission-disequilibrium test to search the genome for a locus that accounts for a relative risk of between 2 and 3 in a high-risk population, compared with a low-risk population, generally requires between 150 and 800 case-parent pairs of mixed descent. The optimal strategy is to conduct an initial study using markers spaced at < or = 10 cM with cases from the second and third generations of mixed descent, and then to map the disease loci more accurately in a subsequent study of a population with a longer history of admixture. This approach has greater statistical power than allele-sharing designs and has obvious applications to the genetics of hypertension, non-insulin-dependent diabetes, and obesity. PMID:8981962

  15. Virtual environment navigation with look-around mode to explore new real spaces by people who are blind.

    PubMed

    Lahav, Orly; Gedalevitz, Hadas; Battersby, Steven; Brown, David; Evett, Lindsay; Merritt, Patrick

    2018-05-01

    This paper examines the ability of people who are blind to construct a mental map and perform orientation tasks in real space by using Nintendo Wii technologies to explore virtual environments. The participant explores new spaces through haptic and auditory feedback triggered by pointing or walking in the virtual environments and later constructs a mental map, which can be used to navigate in real space. The study included 10 participants who were congenitally or adventitiously blind, divided into experimental and control groups. The research employed virtual environment exploration and orientation tasks in real spaces, using both qualitative and quantitative methods. The results show that the mode of exploration afforded to the experimental group is radically new in orientation and mobility training; as a result, 60% of the experimental participants constructed mental maps that were based on the map model, compared with only 30% of the control group participants. Using technology that enabled them to explore and collect spatial information in a way that does not exist in real space influenced the ability of the experimental group to construct a mental map based on the map model. Implications for rehabilitation: The virtual cane system for the first time enables people who are blind to explore and collect spatial information via the look-around mode in addition to the walk-around mode. People who are blind prefer to use the look-around mode to explore new spaces, as opposed to the walking mode. Although the look-around mode requires users to establish a complex collecting and processing procedure for the spatial data, people who are blind using this mode are able to construct a mental map as a map model. For people who are blind (as for the sighted), construction of a mental map based on the map model offers more flexibility in choosing a walking path in a real space, accounting for changes that occur in the space.

  16. Optimized efficient liver T1ρ mapping using limited spin lock times

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Zhao, Feng; Griffith, James F.; Chan, Queenie; Wang, Yi-Xiang J.

    2012-03-01

    T1ρ relaxation has recently been found to be sensitive to liver fibrosis and has potential to be used for early detection of liver fibrosis and grading. Liver T1ρ imaging and accurate mapping are challenging because of the long scan time, respiration motion and high specific absorption rate. Reduction and optimization of spin lock times (TSLs) are an efficient way to reduce scan time and radiofrequency energy deposition of T1ρ imaging, but maintain the near-optimal precision of T1ρ mapping. This work analyzes the precision in T1ρ estimation with limited, in particular two, spin lock times, and explores the feasibility of using two specific operator-selected TSLs for efficient and accurate liver T1ρ mapping. Two optimized TSLs were derived by theoretical analysis and numerical simulations first, and tested experimentally by in vivo rat liver T1ρ imaging at 3 T. The simulation showed that the TSLs of 1 and 50 ms gave optimal T1ρ estimation in a range of 10-100 ms. In the experiment, no significant statistical difference was found between the T1ρ maps generated using the optimized two-TSL combination and the maps generated using the six TSLs of [1, 10, 20, 30, 40, 50] ms according to one-way ANOVA analysis (p = 0.1364 for liver and p = 0.8708 for muscle).
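
    With only two spin lock times, the mono-exponential model S(TSL) = S0 exp(-TSL/T1ρ) has a closed-form two-point solution, which is what makes the limited-TSL protocol efficient. A minimal sketch on synthetic data follows; the noise level and map size are illustrative, while the 1 ms and 50 ms pair is the optimized combination reported above.

```python
import numpy as np

# Two-point T1rho fit: S(TSL) = S0 * exp(-TSL / T1rho) implies
# T1rho = (TSL2 - TSL1) / ln(S1 / S2). Synthetic data for illustration.
tsl1, tsl2 = 1.0, 50.0                               # ms, optimized TSL pair

rng = np.random.default_rng(3)
t1rho_true = rng.uniform(10, 100, size=(64, 64))     # synthetic ground-truth map (ms)
s1 = np.exp(-tsl1 / t1rho_true) + rng.normal(0, 1e-3, t1rho_true.shape)
s2 = np.exp(-tsl2 / t1rho_true) + rng.normal(0, 1e-3, t1rho_true.shape)

t1rho = (tsl2 - tsl1) / np.log(s1 / s2)              # pixel-wise closed-form fit
print(np.nanmean(np.abs(t1rho - t1rho_true)))        # small mean absolute error
```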

  17. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    PubMed

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
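
    The ML estimator in this setting reduces to least squares, which can be minimized by summing back-projected residuals over the measured frames. The sketch below is a minimal illustration with circular shifts, a symmetric blur, and 2x decimation chosen so the adjoint operators stay simple; it is not the paper's full ML/MAP/POCS machinery.

```python
import numpy as np

rng = np.random.default_rng(4)

def blur(img):
    # symmetric 3-tap separable blur; a symmetric kernel is self-adjoint,
    # which keeps the gradient computation below simple
    k = (0.25, 0.5, 0.25)
    out = sum(w * np.roll(img, s, axis=0) for w, s in zip(k, (-1, 0, 1)))
    return sum(w * np.roll(out, s, axis=1) for w, s in zip(k, (-1, 0, 1)))

def forward(x, shift, offset):
    # measurement model A_k: circular warp -> blur -> decimation by 2
    return blur(np.roll(x, shift, axis=(0, 1)))[offset[0]::2, offset[1]::2]

def adjoint(r, shift, offset, shape):
    # A_k^T: zero-filled upsampling -> blur -> inverse warp
    up = np.zeros(shape)
    up[offset[0]::2, offset[1]::2] = r
    return np.roll(blur(up), (-shift[0], -shift[1]), axis=(0, 1))

truth = np.kron(rng.random((8, 8)), np.ones((4, 4)))   # 32x32 piecewise-flat scene
frames = [((0, 0), (0, 0)), ((1, 0), (1, 0)), ((0, 1), (0, 1)), ((1, 1), (1, 1))]
obs = [(s, o, forward(truth, s, o) + 0.01 * rng.normal(size=(16, 16)))
       for s, o in frames]

x = np.zeros_like(truth)
mu = 0.25                              # step size below 2/L for 4 frames
for _ in range(200):                   # x <- x + mu * sum_k A_k^T (y_k - A_k x)
    grad = np.zeros_like(x)
    for s, o, y in obs:
        grad += adjoint(y - forward(x, s, o), s, o, x.shape)
    x += mu * grad
print(float(np.mean(np.abs(x - truth))))   # small residual reconstruction error
```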

  18. Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.

    PubMed

    Demartines, P; Herault, J

    1997-01-01

    We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

  19. Exploration of a Capability-Focused Aerospace System of Systems Architecture Alternative with Bilayer Design Space, Based on RST-SOM Algorithmic Methods

    PubMed Central

    Li, Zhifei; Qin, Dongliang

    2014-01-01

    In defense related programs, the use of capability-based analysis, design, and acquisition has been significant. In order to confront one of the most challenging features of a huge design space in capability based analysis (CBA), a literature review of design space exploration was first examined. Then, in the process of an aerospace system of systems design space exploration, a bilayer mapping method was put forward, based on the existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data mining RST (rough sets theory) and SOM (self-organized mapping) techniques, the alternative to the aerospace system of systems architecture was mapping from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation. PMID:24790572

  1. Energy minimization on manifolds for docking flexible molecules

    PubMed Central

    Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima

    2015-01-01

    In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722

  2. Fluctuating Charge-Order in Optimally Doped Bi- 2212 Revealed by Momentum-resolved Electron Energy Loss Spectroscopy

    NASA Astrophysics Data System (ADS)

    Husain, Ali; Vig, Sean; Kogar, Anshul; Mishra, Vivek; Rak, Melinda; Mitrano, Matteo; Johnson, Peter; Gu, Genda; Fradkin, Eduardo; Norman, Michael; Abbamonte, Peter

    Static charge order is a ubiquitous feature of the underdoped cuprates. However, at optimal doping, charge order has been thought to be completely suppressed, suggesting an interplay between the charge-ordering and superconducting order parameters. Using momentum-resolved electron energy loss spectroscopy (M-EELS) we show the existence of diffuse fluctuating charge order in the optimally doped cuprate Bi2Sr2CaCu2O8+δ (Bi-2212) at low temperature. We present full momentum-space maps of both elastic and inelastic scattering at room temperature and below the superconducting transition with 4 meV resolution. We show that the "rods" of diffuse scattering indicate nematic-like fluctuations, and that the energy width defines a fluctuation timescale of 160 fs. We discuss the implications of fluctuating charge order for the dynamics at optimal doping. This work was supported by the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF-4542. An early prototype of the M-EELS instrument was supported by the DOE Center for Emergent Superconductivity under Award No. DE-AC02-98CH10886.

  3. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    ...file. Symbol Map: library-file.library-unit{.subunit}.SYMAP; Statement Map: library-file.library-unit{.subunit}.SMAP; Type Map: library-file.library-unit{.subunit}.TMAP. The library...generator: SYMAP (Symbol Map code generator), SMAP (Updated Statement Map code generator), TMAP (Type Map code generator). A.3.5 The PUNIT Command. The PUNIT...(Core.Stmtmap) NAME Tmap (Core.Typemap) END. Example A-3: Compiler Command Stream for the Code Generator (Texas Instruments, Ada Optimizing Compiler).

  4. Assessing efficiency of spatial sampling using combined coverage analysis in geographical and feature spaces

    NASA Astrophysics Data System (ADS)

    Hengl, Tomislav

    2015-04-01

    The efficiency of spatial sampling largely determines the success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide a good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets that were produced using unknown or inconsistent sampling algorithms, and many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how can these 'representation' problems be quantified, and how can this knowledge be incorporated into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R), which simultaneously determines (effective) inclusion probabilities as an average between kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature-space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (the costs of a field survey are usually a function of distance from the road network, slope and land cover) to allow simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature-space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas, e.g. areas for which no previous spatial prediction model exists. The presentation includes data processing demos with the standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.
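
    A rough Python analog of the idea (the author's spsample.prob is an R function; here the MaxEnt step is stood in for by a kernel density estimate in covariate space, and the data are synthetic):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch: effective inclusion probability per sampling location as the
# average of a geographic density and a feature-space density (synthetic
# data; the real function also supports accessibility/cost weighting).
rng = np.random.default_rng(5)
xy = rng.random((2, 100)) * [[100], [100]]          # sampled locations in a 100x100 area
covariates = rng.normal(size=(3, 100))              # covariate values at those locations

geo = gaussian_kde(xy)(xy)                          # geographic spreading of points
feat = gaussian_kde(covariates)(covariates)         # feature-space spreading of points

norm = lambda d: d / d.max()                        # scale each density to [0, 1]
iprob = 0.5 * (norm(geo) + norm(feat))              # effective inclusion probability
print(iprob.min(), iprob.mean())                    # low values flag poorly covered sites
```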

  5. Optimal mapping of irregular finite element domains to parallel processors

    NASA Technical Reports Server (NTRS)

    Flower, J.; Otto, S.; Salama, M.

    1987-01-01

    Mapping the solution domain of n finite elements onto N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
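
    A toy version of the annealing approach is sketched below, with a cost that trades workload imbalance against cut edges; the mesh connectivity, weights, and cooling schedule are illustrative assumptions rather than the paper's settings.

```python
import math
import random

# Toy simulated-annealing mapping of mesh elements to processors: the cost
# trades workload imbalance against communication (edges cut between
# processors), in the spirit of the annealing analogy described above.
random.seed(0)
n_elem, n_proc = 60, 4
edges = [(i, i + 1) for i in range(n_elem - 1)] + [(i, i + 7) for i in range(n_elem - 7)]

def cost(assign):
    loads = [assign.count(p) for p in range(n_proc)]
    imbalance = max(loads) - min(loads)
    cut = sum(assign[a] != assign[b] for a, b in edges)
    return 10.0 * imbalance + cut

assign = [random.randrange(n_proc) for _ in range(n_elem)]
c = cost(assign)
T = 20.0
while T > 0.01:
    i = random.randrange(n_elem)               # propose moving one element
    p_old = assign[i]
    assign[i] = random.randrange(n_proc)
    c_new = cost(assign)
    if c_new <= c or random.random() < math.exp((c - c_new) / T):
        c = c_new                              # accept (always if better)
    else:
        assign[i] = p_old                      # reject: undo the move
    T *= 0.999                                 # geometric cooling schedule
print(c)
```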

  6. A new automatic synthetic aperture radar-based flood mapping application hosted on the European Space Agency's Grid Processing of Demand Fast Access to Imagery environment

    NASA Astrophysics Data System (ADS)

    Matgen, Patrick; Giustarini, Laura; Hostache, Renaud

    2012-10-01

    This paper introduces an automatic flood mapping application hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to operationally deliver maps of flooded areas using both recent and historical acquisitions of SAR data. With the flooding-related exploitation of data generated by the upcoming ESA SENTINEL-1 SAR mission as a short-term target, the flood mapping application consists of two building blocks: i) a set of query tools for selecting the "crisis image" and the optimal corresponding "reference image" from the G-POD archive, and ii) an algorithm for extracting flooded areas via change detection using the previously selected crisis and reference images. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate reference image. Potential users will also be able to apply the implemented flood delineation algorithm, which combines histogram thresholding, region growing, and change detection to enable automatic, objective, and reliable flood extent extraction from SAR images. Both algorithms are computationally efficient and operate with minimum data requirements. A case study of the high-magnitude flooding event that occurred in July 2007 on the Severn River, UK, observed with a moderate-resolution SAR sensor as well as airborne photography, highlights the performance of the proposed online application. The flood mapping application on G-POD can be used sporadically, i.e. whenever a major flood event occurs and there is a demand for SAR-based flood extent maps. In the long term, a potential extension of the application could consist in systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis.
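
    A minimal sketch of the thresholding-plus-change-detection core follows (synthetic speckle, with Otsu's histogram threshold applied to the log-ratio image); the paper's algorithm additionally applies region growing to refine the resulting mask.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Histogram thresholding (Otsu): maximize between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = w[:k].sum(), w[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:k] * centers[:k]).sum() / w0
        m1 = (w[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Synthetic stand-in for the "reference" and "crisis" SAR backscatter:
# flooding lowers backscatter, so the log-ratio dips over flooded pixels.
rng = np.random.default_rng(6)
reference = rng.gamma(4.0, 0.05, size=(128, 128))          # speckled land
crisis = reference * rng.gamma(16.0, 1 / 16.0, size=(128, 128))  # new speckle
crisis[40:90, 30:100] *= 0.15                              # smooth water patch
log_ratio = np.log(crisis / reference)

t = otsu_threshold(log_ratio.ravel())
flood_mask = log_ratio < t                                 # change-detection flood map
print(flood_mask.mean())                                   # fraction flagged as flooded
```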

  7. An optimization method of VON mapping for energy efficiency and routing in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Liu, Huanlin; Xiong, Cuilian; Chen, Yong; Li, Changping; Chen, Derun

    2018-03-01

    To improve resource utilization efficiency, network virtualization in elastic optical networks has been developed, sharing the same physical network among different users and applications. In the process of virtual node mapping, longer paths between physical nodes consume more spectrum resources and energy. To address this problem, we propose a virtual optical network mapping algorithm called the genetic multi-objective optimization virtual optical network mapping algorithm (GM-OVONM-AL), which jointly optimizes energy consumption and spectrum resource consumption in the process of virtual optical network mapping. Firstly, a vector function is proposed to balance energy consumption and spectrum resources by optimizing population classification and crowding-distance sorting. Then, an adaptive crossover operator based on hierarchical comparison is proposed to improve search ability and convergence speed. In addition, the principle of survival of the fittest is introduced to select better individuals according to their domination rank. Compared with the spectrum-consecutiveness opaque virtual optical network mapping algorithm and the baseline opaque virtual optical network mapping algorithm, simulation results show that the proposed GM-OVONM-AL achieves the lowest bandwidth blocking probability and reduces energy consumption.

  8. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides.

    PubMed

    Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and from each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
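
    A minimal sketch of the reference-vector idea: once a classification map has assigned pixels to structures, each structure's pixels are collapsed to a single direction in color space, and new pixels can be scored by projection onto those directions. The full method uses a decision-tree SVM to produce the maps; the helper below simply assumes per-pixel labels are already available.

```python
import numpy as np

def reference_vectors(pixels, labels, classes):
    # Collapse each structure's pixels (N x 3 RGB array) to a unit direction.
    refs = {c: pixels[labels == c].mean(axis=0) for c in classes}
    # Normalize so projections compare color direction rather than brightness.
    return {c: v / np.linalg.norm(v) for c, v in refs.items()}

def classify(pixels, refs):
    # Score each pixel by projection onto every reference vector; take the best.
    names = list(refs)
    scores = np.stack([pixels @ refs[c] for c in names], axis=-1)
    return np.array(names)[np.argmax(scores, axis=-1)]
```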

  9. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
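
    A toy version of the annealing step is sketched below: segments are modeled as binary aperture masks with weights, and random perturbations of shapes or weights are accepted under the usual Metropolis criterion against the squared difference between the sequenced and target intensity maps. Real MLC apertures must additionally respect leaf-pair (row-convexity) constraints, which this sketch deliberately ignores.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(masks, weights):
    """Sequenced intensity map = weighted sum of binary aperture masks."""
    return np.tensordot(weights, masks, axes=1)

def perturb(masks, weights):
    masks, weights = masks.copy(), weights.copy()
    k = rng.integers(len(masks))
    if rng.random() < 0.5:                         # toggle one open/closed cell
        r, c = rng.integers(masks.shape[1]), rng.integers(masks.shape[2])
        masks[k, r, c] = 1 - masks[k, r, c]
    else:                                          # jitter one segment weight
        weights[k] = max(0.0, weights[k] + rng.normal(0, 0.05))
    return masks, weights

def anneal(target, masks, weights, n_iter=20000, t0=1.0):
    cost = np.sum((render(masks, weights) - target) ** 2)
    for i in range(n_iter):
        t = max(t0 * (1 - i / n_iter), 1e-9)       # linear cooling schedule
        new_m, new_w = perturb(masks, weights)
        new_cost = np.sum((render(new_m, new_w) - target) ** 2)
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / t):
            masks, weights, cost = new_m, new_w, new_cost
    return masks, weights
```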

  10. Modeling Leadership Styles in Human-Robot Team Dynamics

    NASA Technical Reports Server (NTRS)

    Cruz, Gerardo E.

    2005-01-01

    The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to what situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in tele-operation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.

  11. Visualizing protein partnerships in living cells and organisms.

    PubMed

    Lowder, Melissa A; Appelbaum, Jacob S; Hobert, Elissa M; Schepartz, Alanna

    2011-12-01

    In recent years, scientists have expanded their focus from cataloging genes to characterizing the multiple states of their translated products. One anticipated result is a dynamic map of the protein association networks and activities that occur within the cellular environment. While in vitro-derived network maps can illustrate which of a multitude of possible protein-protein associations could exist, they supply a falsely static picture lacking the subtleties of subcellular location (where) or cellular state (when). Generating protein association network maps that are informed by both subcellular location and cell state requires novel approaches that accurately characterize the state of protein associations in living cells and provide precise spatiotemporal resolution. In this review, we highlight recent advances in visualizing protein associations and networks under increasingly native conditions. These advances include second generation protein complementation assays (PCAs), chemical and photo-crosslinking techniques, and proximity-induced ligation approaches. The advances described focus on background reduction, signal optimization, rapid and reversible reporter assembly, decreased cytotoxicity, and minimal functional perturbation. Key breakthroughs have addressed many challenges and should expand the repertoire of tools useful for generating maps of protein interactions resolved in both time and space. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.

    PubMed

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2013-11-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS - a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. Parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing.

  13. Three petabytes or bust: planning science observations for NISAR

    NASA Astrophysics Data System (ADS)

    Doubleday, Joshua R.

    2016-05-01

    The National Aeronautics and Space Administration (NASA) and the Indian Space Research Organization (ISRO) have formed a joint agency mission, NASA ISRO Synthetic Aperture Radar (NISAR), to fly in the 2020 timeframe, charged with collecting Synthetic Aperture Radar data over nearly all of Earth's land and ice, to advance science in ecosystems, solid-earth and cryospheric disciplines with global time-series maps of various phenomena. Over a three-year mission span, NISAR will collect on the order of 24 Terabits of raw radar data per day. Developing a plan to collect the data necessary for these three primary science disciplines and their sub-disciplines has been challenging in terms of overlapping geographic regions of interest, temporal requirements, competing modes of the radar instrument, and data-volume resources. One of the chief tools in building a plan of observations against these requirements has been a software tool developed at JPL, the Compressed Large-scale Scheduler Planner (CLASP). CLASP intersects the temporo-geometric visibilities of a spaceborne instrument with campaigns of temporospatial maps of scientific interest, in an iterative squeaky-wheel optimization loop. While the overarching strategy for science observations has evolved through the formulation phases of this mission, so has the use of CLASP. We'll show how this problem space and tool have evolved over time, as well as some of the current parameter estimates for NISAR and its overall mission plan.

  14. Optical design and stray light analysis for the JANUS camera of the JUICE space mission

    NASA Astrophysics Data System (ADS)

    Greggio, D.; Magrin, D.; Munari, M.; Zusi, M.; Ragazzoni, R.; Cremonese, G.; Debei, S.; Friso, E.; Della Corte, V.; Palumbo, P.; Hoffmann, H.; Jaumann, R.; Michaelis, H.; Schmitz, N.; Schipani, P.; Lara, L. M.

    2015-09-01

    The JUICE (JUpiter ICy moons Explorer) satellite of the European Space Agency (ESA) is dedicated to the detailed study of Jupiter and its moons. Among the whole instrument suite, JANUS (Jovis, Amorum ac Natorum Undique Scrutator) is the camera system of JUICE designed for imaging at visible wavelengths. It will conduct an in-depth study of Ganymede, Callisto and Europa, and explore most of the Jovian system and Jupiter itself, performing, in the case of Ganymede, a global mapping of the satellite with a resolution of 400 m/px. The optical design chosen to meet the scientific goals of JANUS is a three mirror anastigmatic system in an off-axis configuration. To ensure that the achieved contrast is high enough to observe the features on the surface of the satellites, we also performed a preliminary stray light analysis of the telescope. We provide here a short description of the optical design and we present the procedure adopted to evaluate the stray-light expected during the mapping phase of the surface of Ganymede. We also use the results obtained from the first run of simulations to optimize the baffle design.

  15. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    PubMed

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known, while its inner product with the kernel matrix embedded is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  16. Virtual optical network mapping and core allocation in elastic optical networks using multi-core fibers

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-11-01

    Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical network. However, it faces some challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical network uses multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping and core allocation. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.

  17. Analysis of dynamically stable patterns in a maze-like corridor using the Wasserstein metric.

    PubMed

    Ishiwata, Ryosuke; Kinukawa, Ryota; Sugiyama, Yuki

    2018-04-23

    The two-dimensional optimal velocity (2d-OV) model represents a dissipative system with asymmetric interactions, and is thus suitable for reproducing behaviours such as pedestrian dynamics and the collective motion of living organisms. In this study, we found that particles in the 2d-OV model form optimal patterns in a maze-like corridor. Then, we estimated the stability of such patterns using the Wasserstein metric. Furthermore, we mapped these patterns into the Wasserstein metric space and represented them as points in a plane. As a result, we discovered that the stability of the dynamical patterns is strongly affected by the model sensitivity, which controls the motion of each particle. In addition, we verified the existence of two stable macroscopic patterns which were cohesive and appeared regularly over the time evolution of the model.
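
    For intuition on the metric itself, the one-dimensional earth mover's distance between two particle configurations can be computed directly with SciPy; the paper works with full 2D patterns, so this is only a simplified illustration with invented sample data.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
pattern_a = rng.uniform(0.0, 10.0, 200)             # particle coordinates, snapshot 1
pattern_b = pattern_a + 0.3 * rng.normal(size=200)  # slightly perturbed snapshot 2

# A small distance means the two snapshots realize nearly the same pattern,
# which is how stability over the time evolution can be quantified.
print(wasserstein_distance(pattern_a, pattern_b))
```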

  18. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  19. 3D nonrigid registration via optimal mass transport on the GPU.

    PubMed

    Ur Rehman, Tauseef; Haber, Eldad; Pryor, Gallagher; Melonakos, John; Tannenbaum, Allen

    2009-12-01

    In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster than previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.

  20. Optimizing Wind Power Generation while Minimizing Wildlife Impacts in an Urban Area

    PubMed Central

    Bohrer, Gil; Zhu, Kunpeng; Jones, Robert L.; Curtis, Peter S.

    2013-01-01

    The location of a wind turbine is critical to its power output, which is strongly affected by the local wind field. Turbine operators typically seek locations with the best wind at the lowest level above ground since turbine height affects installation costs. In many urban applications, such as small-scale turbines owned by local communities or organizations, turbine placement is challenging because of limited available space and because the turbine often must be added without removing existing infrastructure, including buildings and trees. The need to minimize turbine hazard to wildlife compounds the challenge. We used an exclusion zone approach for turbine-placement optimization that incorporates spatially detailed maps of wind distribution and wildlife densities with power output predictions for the Ohio State University campus. We processed public GIS records and airborne lidar point-cloud data to develop a 3D map of all campus buildings and trees. High resolution large-eddy simulations and long-term wind climatology were combined to provide land-surface-affected 3D wind fields and the corresponding wind-power generation potential. This power prediction map was then combined with bird survey data. Our assessment predicts that exclusion of areas where bird numbers are highest will have modest effects on the availability of locations for power generation. The exclusion zone approach allows the incorporation of wildlife hazard in wind turbine siting and power output considerations in complex urban environments even when the quantitative interaction between wildlife behavior and turbine activity is unknown. PMID:23409117
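
    A minimal sketch of the exclusion zone idea, assuming gridded maps of predicted power and bird density: cells above a chosen bird-density quantile are masked out, and the remaining cells are ranked by predicted power. The quantile and site count below are illustrative defaults, not values from the study.

```python
import numpy as np

def site_turbines(power_map, bird_density, bird_quantile=0.9, n_sites=3):
    # Exclusion zone: mask the highest-bird-density cells outright.
    excluded = bird_density >= np.quantile(bird_density, bird_quantile)
    usable = np.where(excluded, -np.inf, power_map.astype(float))
    # Rank remaining cells by predicted power and return the best grid cells.
    flat = np.argsort(usable, axis=None)[::-1][:n_sites]
    return np.unravel_index(flat, power_map.shape)  # (rows, cols) of best sites
```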

  1. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  2. Average variograms to guide soil sampling

    NASA Astrophysics Data System (ADS)

    Kerry, R.; Oliver, M. A.

    2004-10-01

    To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and/or particular topographic positions, an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method, and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
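
    The half-range rule of thumb is simple enough to state as code; the range value in the example is invented for illustration, not one of the measured averages from the paper.

```python
def sampling_interval_from_range(variogram_range_m, fraction=0.5):
    """Half-range rule of thumb: sample at less than half the variogram range
    so that neighbouring observations remain spatially correlated for kriging."""
    return fraction * variogram_range_m

# E.g., an average variogram range of 120 m on a given parent material
# suggests a grid spacing of at most 60 m for future surveys.
print(sampling_interval_from_range(120.0))  # -> 60.0
```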

  3. Optimizing wind power generation while minimizing wildlife impacts in an urban area.

    PubMed

    Bohrer, Gil; Zhu, Kunpeng; Jones, Robert L; Curtis, Peter S

    2013-01-01

    The location of a wind turbine is critical to its power output, which is strongly affected by the local wind field. Turbine operators typically seek locations with the best wind at the lowest level above ground since turbine height affects installation costs. In many urban applications, such as small-scale turbines owned by local communities or organizations, turbine placement is challenging because of limited available space and because the turbine often must be added without removing existing infrastructure, including buildings and trees. The need to minimize turbine hazard to wildlife compounds the challenge. We used an exclusion zone approach for turbine-placement optimization that incorporates spatially detailed maps of wind distribution and wildlife densities with power output predictions for the Ohio State University campus. We processed public GIS records and airborne lidar point-cloud data to develop a 3D map of all campus buildings and trees. High resolution large-eddy simulations and long-term wind climatology were combined to provide land-surface-affected 3D wind fields and the corresponding wind-power generation potential. This power prediction map was then combined with bird survey data. Our assessment predicts that exclusion of areas where bird numbers are highest will have modest effects on the availability of locations for power generation. The exclusion zone approach allows the incorporation of wildlife hazard in wind turbine siting and power output considerations in complex urban environments even when the quantitative interaction between wildlife behavior and turbine activity is unknown.

  4. Lead optimization in the nondrug-like space.

    PubMed

    Zhao, Hongyu

    2011-02-01

    Drug-like space might be more densely populated with orally available compounds than the remaining chemical space, but lead optimization can still occur outside this space. Oral drug space is more dynamic than the relatively static drug-like space. As new targets emerge and optimization tools advance the oral drug space might expand. Lead optimization protocols are becoming more complex with greater optimization needs to be satisfied, which consequently could change the role of drug-likeness in the process. Whereas drug-like space should usually be explored preferentially, it can be easier to find oral drugs for certain targets in the nondrug-like space. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Reciprocal space mapping and single-crystal scattering rods.

    PubMed

    Smilgies, Detlef M; Blasini, Daniel R; Hotta, Shu; Yanagi, Hisao

    2005-11-01

    Reciprocal space mapping using a linear gas detector in combination with a matching Soller collimator has been applied to map scattering rods of well oriented organic microcrystals grown on a solid surface. Formulae are provided to correct image distortions in angular space and to determine the required oscillation range, in order to measure properly integrated scattering intensities.

  6. Search space mapping: getting a picture of coherent laser control.

    PubMed

    Shane, Janelle C; Lozovoy, Vadim V; Dantus, Marcos

    2006-10-12

    Search space mapping is a method for quickly visualizing the experimental parameters that can affect the outcome of a coherent control experiment. We demonstrate experimental search space mapping for the selective fragmentation and ionization of para-nitrotoluene and show how this method allows us to gather information about the dominant trends behind our achieved control.

  7. Mapping Cultural Boundaries in Schools and Communities: Redefining Spaces through Organizing

    ERIC Educational Resources Information Center

    Wood, Gerald K.; Lemley, Christine K.

    2015-01-01

    For this study, the authors look specifically at cultural maps that the youth created in Student Involvement Day (SID), a program committed to youth empowerment. In these maps, youth identified spaces in their schools and communities that are open and inclusive of their cultures or spaces where their cultures are excluded. Drawing on critical…

  8. Mapping Inner Space: Learning and Teaching Visual Mapping. Second Edition.

    ERIC Educational Resources Information Center

    Margulies, Nancy

    More than 10 years ago, when "Mapping Inner Space" was first published, a few teachers were using this creative technique and teaching it to their students. Today mapping is widely used in schools, universities, and the corporate world, as well. This second edition of the book explores a variety of mapping styles and also takes a fresh look at the…

  9. Differences in Spatial Knowledge of Individuals with Blindness When Using Audiotactile Maps, Using Tactile Maps, and Walking

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Barouti, Marialena; Koustriava, Eleni

    2018-01-01

    To examine how individuals with visual impairments understand space and the way they develop cognitive maps, we studied the differences in cognitive maps resulting from different methods and tools for spatial coding in large geographical spaces. We examined the ability of 21 blind individuals to create cognitive maps of routes in unfamiliar areas…

  10. Cultural Mapping as a Social Practice: A Response to "Mapping the Cultural Boundaries in Schools and Communities: Redefining Spaces Through Organizing"

    ERIC Educational Resources Information Center

    Vadeboncoeur, Jennifer A.; Hanif-Shahban, Shenaz A.

    2015-01-01

    Inspired by Gerald Wood and Elizabeth Lemley's (2015) article entitled "Mapping the Cultural Boundaries in Schools and Communities: Redefining Spaces Through Organizing," this response inquires further into cultural mapping as a social practice. From our perspective, cultural mapping has potential to contribute to place making, as well…

  11. Optimal mapping of neural-network learning on message-passing multicomputers

    NASA Technical Reports Server (NTRS)

    Chu, Lon-Chan; Wah, Benjamin W.

    1992-01-01

    A minimization of learning-algorithm completion time is sought in the present optimal-mapping study of the learning process in multilayer feed-forward artificial neural networks (ANNs) for message-passing multicomputers. A novel approximation algorithm for mappings of this kind is derived from observations of the dominance of a parallel ANN algorithm over its communication time. Attention is given to both static and dynamic mapping schemes for systems with static and dynamic background workloads, as well as to experimental results obtained for simulated mappings on multicomputers with dynamic background workloads.

  12. Accelerating Commercial Remote Sensing

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Through the Visiting Investigator Program (VIP) at Stennis Space Center, Community Coffee was able to use satellites to forecast coffee crops in Guatemala. Using satellite imagery, the company can produce detailed maps that separate coffee cropland from wild vegetation and show information on the health of specific crops. The data can control coffee prices and eventually may be used to optimize application of fertilizers, pesticides and irrigation. This would result in maximal crop yields, minimal pollution and lower production costs. VIP is a mechanism involving NASA funding designed to accelerate the growth of commercial remote sensing by promoting general awareness and basic training in the technology.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad

    With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have in the generation of phase diagrams and beyond.

  14. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
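
    For intuition, the one-dimensional case has a closed form that any such gradient descent approximates: the L2-optimal map is the monotone rearrangement T = F_dst^{-1} ∘ F_src. A short empirical version, using sample-based CDFs, is sketched below; it is not the paper's multiresolution scheme, only the 1D ground truth it generalizes.

```python
import numpy as np

def transport_map_1d(src_samples, dst_samples, x):
    """In 1-D the L2-optimal transport map is the monotone rearrangement
    T = F_dst^{-1} o F_src; here both CDFs are estimated empirically."""
    src = np.sort(src_samples)
    # Empirical CDF of the source evaluated at x, then the destination quantile.
    u = np.searchsorted(src, x, side="right") / len(src)
    return np.quantile(np.sort(dst_samples), np.clip(u, 0.0, 1.0))
```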

  15. Rhesus monkeys (Macaca mulatta) map number onto space

    PubMed Central

    Drucker, Caroline B.; Brannon, Elizabeth M.

    2014-01-01

    Humans map number onto space. However, the origins of this association, and particularly the degree to which it depends upon cultural experience, are not fully understood. Here we provide the first demonstration of a number-space mapping in a non-human primate. We trained four adult male rhesus macaques (Macaca mulatta) to select the fourth position from the bottom of a five-element vertical array. Monkeys maintained a preference to choose the fourth position through changes in the appearance, location, and spacing of the vertical array. We next asked whether monkeys show a spatially-oriented number mapping by testing their responses to the same five-element stimulus array rotated ninety degrees into a horizontal line. In these horizontal probe trials, monkeys preferentially selected the fourth position from the left, but not the fourth position from the right. Our results indicate that rhesus macaques map number onto space, suggesting that the association between number and space in human cognition is not purely a result of cultural experience and instead has deep evolutionary roots. PMID:24762923

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics

    Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force in the sense that it allows the approximation of the potential of mean force under both linear and non-linear coarse graining mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization, which we derive for general non-linear coarse graining maps. We present in detail the generalized force matching condition through applications to specific examples in molecular systems.

  17. Reproducible, high-throughput synthesis of colloidal nanocrystals for optimization in multidimensional parameter space.

    PubMed

    Chan, Emory M; Xu, Chenxu; Mao, Alvin W; Han, Gang; Owen, Jonathan S; Cohen, Bruce E; Milliron, Delia J

    2010-05-12

    While colloidal nanocrystals hold tremendous potential for both enhancing fundamental understanding of materials scaling and enabling advanced technologies, progress in both realms can be inhibited by the limited reproducibility of traditional synthetic methods and by the difficulty of optimizing syntheses over a large number of synthetic parameters. Here, we describe an automated platform for the reproducible synthesis of colloidal nanocrystals and for the high-throughput optimization of physical properties relevant to emerging applications of nanomaterials. This robotic platform enables precise control over reaction conditions while performing workflows analogous to those of traditional flask syntheses. We demonstrate control over the size, size distribution, kinetics, and concentration of reactions by synthesizing CdSe nanocrystals with 0.2% coefficient of variation in the mean diameters across an array of batch reactors and over multiple runs. Leveraging this precise control along with high-throughput optical and diffraction characterization, we effectively map multidimensional parameter space to tune the size and polydispersity of CdSe nanocrystals, to maximize the photoluminescence efficiency of CdTe nanocrystals, and to control the crystal phase and maximize the upconverted luminescence of lanthanide-doped NaYF₄ nanocrystals. On the basis of these demonstrative examples, we conclude that this automated synthesis approach will be of great utility for the development of diverse colloidal nanomaterials for electronic assemblies, luminescent biological labels, electroluminescent devices, and other emerging applications.

  18. The Information Is In the Maps: Representations & Algorithms for Mapping among Geometric Data

    DTIC Science & Technology

    2015-09-30

    space of all maps is a huge space and an important part of the project has addressed the problem of finding compact representations and encodings...understanding the relationships among its parts, or its connections to other data sets that may share the same or similar structure. Towards this end, we have...for the much smaller spaces of interesting maps within a specific application. The machinery developed here has proven of use across a broad spectrum

  19. Minimal invasive epicardial lead implantation: optimizing cardiac resynchronization with a new mapping device for epicardial lead placement.

    PubMed

    Maessen, J G; Phelps, B; Dekker, A L A J; Dijkman, B

    2004-05-01

    To optimize resynchronization in biventricular pacing with epicardial leads, mapping to determine the best pacing site is a prerequisite. A port-access surgical mapping technique was developed that allowed multiple pace site selection and reproducible lead evaluation and implantation. Pressure-volume loop analysis was used for real-time guidance in targeting epicardial lead placement. Even the smallest changes in lead position revealed significantly different functional results. Optimizing the pacing site with this technique allowed functional improvement of up to 40% versus random pace site selection.

  20. Mining chemical reactions using neighborhood behavior and condensed graphs of reactions approaches.

    PubMed

    de Luca, Aurélie; Horvath, Dragos; Marcou, Gilles; Solov'ev, Vitaly; Varnek, Alexandre

    2012-09-24

    This work addresses the problem of similarity search and classification of chemical reactions using Neighborhood Behavior (NB) and Condensed Graphs of Reaction (CGR) approaches. The CGR formalism represents chemical reactions as a classical molecular graph with dynamic bonds, enabling descriptor calculations on this graph. Different types of the ISIDA fragment descriptors generated for CGRs in combination with two metrics--Tanimoto and Euclidean--were considered as chemical spaces, to serve for reaction dissimilarity scoring. The NB method has been used to select an optimal combination of descriptors which distinguish different types of chemical reactions in a database containing 8544 reactions of 9 classes. Relevance of NB analysis has been validated in generic (multiclass) similarity search and in clustering with Self-Organizing Maps (SOM). NB-compliant sets of descriptors were shown to display enhanced mapping propensities, allowing the construction of better Self-Organizing Maps and similarity searches (NB and classical similarity search criteria--AUC ROC--correlate at a level of 0.7). The analysis of the SOM clusters proved chemically meaningful CGR substructures representing specific reaction signatures.

  1. Development of optimized segmentation map in dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Yamakawa, Keisuke; Ueki, Hironori

    2012-03-01

    Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT, the difference between two attenuation coefficients acquired with two kinds of X-ray energy enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. Firstly, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). If we consider the tube voltage, it is possible to create an optimized map, but unfortunately we cannot consider the tube current. Secondly, the X-ray condition is not optimized. The condition can be set empirically, but this means that the optimized condition is not used correctly. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors. The distribution of the attenuation coefficient is modeled by considering the tube current. In Method-2, the optimized condition is decided so as to minimize segmentation errors depending on tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 by performing a phantom experiment under a fixed condition, and of Method-2 by performing a phantom experiment under different combinations calculated from the total exposure constant. When Method-1 was followed by Method-2, the segmentation error was reduced from 37.8% to 13.5%. These results demonstrate that our developed methods can achieve highly accurate segmentation while keeping the total exposure constant.
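
    One plausible reading of such an optimized segmentation map is a classifier in the (mu_low, mu_high) attenuation plane whose class distributions widen as the tube current (and hence the noise level) changes. The sketch below classifies voxels by Mahalanobis distance to assumed class means and covariances; this statistical model is an illustration, not the authors' exact formulation.

```python
import numpy as np

def segment_dect(mu_low, mu_high, class_means, class_covs):
    """Classify each voxel by Mahalanobis distance to tissue classes modelled
    in the (mu_low, mu_high) plane; widening a class covariance with tube
    current is the spirit of optimizing the map for the X-ray condition."""
    x = np.stack([mu_low.ravel(), mu_high.ravel()], axis=1)
    dists = []
    for mean, cov in zip(class_means, class_covs):
        diff = x - mean
        dists.append(np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff))
    return np.argmin(np.stack(dists), axis=0).reshape(mu_low.shape)
```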

  2. Design and optimization of color lookup tables on a simplex topology.

    PubMed

    Monga, Vishal; Bala, Raja; Mo, Xuan

    2012-04-01

    An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of the nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against state-of-the-art lattice (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.
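
    The core primitive, interpolation inside one n-simplex, reduces to solving for barycentric coordinates and blending the node outputs, as the short sketch below shows for a tetrahedron (n = 3). The joint node-placement and output optimization of the paper sits on top of this primitive; the vertex and value arrays here are invented examples.

```python
import numpy as np

def simplex_interpolate(x, vertices, values):
    """Interpolate inside one n-simplex: solve for the barycentric coordinates
    of x with respect to the n+1 vertices, then blend the node output values."""
    v0, rest = vertices[0], vertices[1:]
    t = np.linalg.solve((rest - v0).T, x - v0)   # coordinates w.r.t. v1..vn
    bary = np.concatenate([[1.0 - t.sum()], t])  # barycentric weights sum to 1
    return bary @ values

# 3-D example: a tetrahedron mapping RGB-like inputs to 3 output channels.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
vals = np.array([[0, 0, 0], [0.9, 0.1, 0], [0.1, 0.8, 0.1], [0, 0.1, 0.9]])
print(simplex_interpolate(np.array([0.25, 0.25, 0.25]), verts, vals))
```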

  3. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    NASA Astrophysics Data System (ADS)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while all physical and operational constraints are satisfied. This paper introduces a novel optimization method named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving the economic dispatch problem.
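
    For reference, one commonly published form of the single-particle MVMO mapping function bends a uniform random variable toward the mean of the solution archive, with shape factors derived from the archive variance; whether this exact variant matches the swarm extension of the paper is an assumption, so treat the sketch as illustrative only.

```python
import numpy as np

def mvmo_map(u, x_mean, s1, s2):
    """One published form of the MVMO mapping function (assumed variant).
    A uniform random u in [0, 1] is bent toward the archive mean x_mean;
    s1, s2 are shape factors that typically grow as the archive variance
    shrinks, concentrating the search around good solutions."""
    def h(x):
        return x_mean * (1 - np.exp(-x * s1)) + (1 - x_mean) * np.exp(-(1 - x) * s2)
    h0, h1 = h(0.0), h(1.0)
    return h(u) + (1 - h1 + h0) * u - h0   # rescaled so the output stays in [0, 1]

# Large shape factors -> samples concentrate near the archive mean (0.7 here).
print(mvmo_map(np.array([0.1, 0.5, 0.9]), x_mean=0.7, s1=10.0, s2=10.0))
```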

  4. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California

    PubMed Central

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R.

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. PMID:28241028

  5. Tier-scalable reconnaissance: the challenge of sensor optimization, sensor deployment, sensor fusion, and sensor interoperability

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; George, Thomas; Tarbell, Mark A.

    2007-04-01

    Robotic reconnaissance operations are called for in extreme environments, not only those such as space, including planetary atmospheres, surfaces, and subsurfaces, but also in potentially hazardous or inaccessible operational areas on Earth, such as mine fields, battlefield environments, enemy occupied territories, terrorist infiltrated environments, or areas that have been exposed to biochemical agents or radiation. Real time reconnaissance enables the identification and characterization of transient events. A fundamentally new mission concept for tier-scalable reconnaissance of operational areas, originated by Fink et al., is aimed at replacing the engineering and safety constrained mission designs of the past. The tier-scalable paradigm integrates multi-tier (orbit, atmosphere, surface/subsurface) and multi-agent (satellite, UAV/blimp, surface/subsurface sensing platforms) hierarchical mission architectures, introducing not only mission redundancy and safety, but also enabling and optimizing intelligent, less constrained, and distributed reconnaissance in real time. Given the mass, size, and power constraints faced by such a multi-platform approach, this is an ideal application scenario for a diverse set of MEMS sensors. To support such mission architectures, a high degree of operational autonomy is required. Essential elements of such operational autonomy are: (1) automatic mapping of an operational area from different vantage points (including vehicle health monitoring); (2) automatic feature extraction and target/region-of-interest identification within the mapped operational area; and (3) automatic target prioritization for close-up examination. These requirements imply the optimal deployment of MEMS sensors and sensor platforms, sensor fusion, and sensor interoperability.

  6. Time-space and cognition-space transformations for transportation network analysis based on multidimensional scaling and self-organizing map

    NASA Astrophysics Data System (ADS)

    Hong, Zixuan; Bian, Fuling

    2008-10-01

    Geographic space, time space and cognition space are three fundamental and interrelated spaces in geographic information systems for transportation. However, the cognition space and its relationships to the time space and geographic space are often neglected. This paper studies the relationships of these three spaces in the urban transportation system from a new perspective and proposes a novel MDS-SOM transformation method which takes advantage of the techniques of multidimensional scaling (MDS) and self-organizing maps (SOM). The MDS-SOM transformation framework includes three kinds of mapping: the geographic-time transformation, the cognition-time transformation and the time-cognition transformation. The transformations in our research provide a better understanding of the interactions of these three spaces, and beneficial knowledge is discovered to help transportation analysis and decision support.

  7. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    PubMed Central

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps not only for simulated data, but also for real data from Chang’E-1, compared to the existing space resection model. PMID:27077855

  8. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    PubMed

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps not only for simulated data, but also for real data from Chang'E-1, compared to the existing space resection model.

  9. Hysteresis compensation of the Prandtl-Ishlinskii model for piezoelectric actuators using modified particle swarm optimization with chaotic map.

    PubMed

    Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua

    2017-07-01

    Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and could cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, in which the parameters are formulated based on the original model with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initial method of adaptive inertia weight based on a chaotic map is proposed. To compare and prove that the swarm's convergence occurs before stochastic initialization and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched using self-tuning, and the simulated results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
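
    The Prandtl-Ishlinskii model being identified is a weighted superposition of backlash (play) operators, sketched compactly below; the thresholds and weights are exactly the parameters a chaotic-map MPSO search would tune. The helper names are illustrative.

```python
import numpy as np

def play_operator(x, r, y0=0.0):
    """Backlash (play) operator with threshold r, the building block of the
    Prandtl-Ishlinskii hysteresis model; x is the input signal over time."""
    y = np.empty_like(x)
    prev = y0
    for i, xi in enumerate(x):
        prev = max(xi - r, min(xi + r, prev))  # output lags input within a band 2r
        y[i] = prev
    return y

def prandtl_ishlinskii(x, thresholds, weights):
    """PI model output: weighted superposition of play operators."""
    return sum(w * play_operator(x, r) for w, r in zip(weights, thresholds))
```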

  10. Monitoring of the Conformational Space of Dipeptides by Generative Topographic Mapping.

    PubMed

    Horvath, Dragos; Marcou, Gilles; Varnek, Alexandre

    2018-01-01

    This work describes a procedure to build generative topographic maps (GTM) as 2D representations of the conformational space (CS) of dipeptides. GTMs with excellent propensities to support highly predictive landscapes of various conformational properties were reported for three dipeptides (AA, KE and KR). CS monitoring via GTM proceeds through the projection of conformer ensembles on the map, producing cumulated responsibility (CR) vectors characteristic of the CS areas covered by the ensemble. The overlap of the CS areas visited by two distinct simulations can be expressed by the Tanimoto coefficient Tc of the associated CRs. This idea was used to monitor the reproducibility of the stochastic evolutionary conformer generation process implemented in S4MPLE. It could be shown that conformers produced by <500 S4MPLE runs reproducibly cover the relevant CS zone at a given setup of the driving force field. The propensity of a simulation to visit the native CS zone can thus be quantitatively estimated as the Tc score with respect to the "native" CR, as defined by the ensemble of dipeptide geometries extracted from PDB proteins. It could be shown that low-energy CS regions were indeed found to fall within the native zone. The Tc overlap score behaved as a smooth function of force field parameters. This opens the perspective of a novel force field parameter tuning procedure, bound to simultaneously optimize the behavior of the in silico simulations for every possible dipeptide. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
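
    The overlap score described above is the continuous Tanimoto coefficient applied to cumulated responsibility vectors; a one-liner suffices, assuming the CR vectors have already been produced by projecting the conformer ensembles on the map.

```python
import numpy as np

def tanimoto(cr_a, cr_b):
    """Continuous Tanimoto coefficient between two cumulated-responsibility
    vectors; a value of 1 means the two conformer ensembles cover exactly
    the same zones of the map."""
    ab = np.dot(cr_a, cr_b)
    return ab / (np.dot(cr_a, cr_a) + np.dot(cr_b, cr_b) - ab)
```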

  11. Topographic Independent Component Analysis reveals random scrambling of orientation in visual space

    PubMed Central

    Martinez-Garcia, Marina; Martinez, Luis M.

    2017-01-01

    Neurons at primary visual cortex (V1) in humans and other species are edge filters organized in orientation maps. In these maps, neurons with similar orientation preference are clustered together in iso-orientation domains. These maps have two fundamental properties: (1) retinotopy, i.e. correspondence between displacements at the image space and displacements at the cortical surface, and (2) a trade-off between good coverage of the visual field with all orientations and continuity of iso-orientation domains in the cortical space. There is an active debate on the origin of these locally continuous maps. While most of the existing descriptions take purely geometric/mechanistic approaches which disregard the network function, a clear exception to this trend in the literature is the original approach of Hyvärinen and Hoyer based on infomax and Topographic Independent Component Analysis (TICA). Although TICA successfully addresses a number of other properties of V1 simple and complex cells, in this work we question the validity of the orientation maps obtained from TICA. We argue that the maps predicted by TICA can be analyzed in the retinal space, and when doing so, it is apparent that they lack the required continuity and retinotopy. Here we show that in the orientation maps reported in the TICA literature it is easy to find examples of violation of the continuity between similarly tuned mechanisms in the retinal space, which suggest a random scrambling incompatible with the maps in primates. The new experiments in the retinal space presented here confirm this guess: TICA basis vectors actually follow a random salt-and-pepper organization back in the image space. Therefore, the interesting clusters found in the TICA topology cannot be interpreted as the actual cortical orientation maps found in cats, primates or humans. In conclusion, Topographic ICA does not reproduce cortical orientation maps. PMID:28640816

  12. Topographic Independent Component Analysis reveals random scrambling of orientation in visual space.

    PubMed

    Martinez-Garcia, Marina; Martinez, Luis M; Malo, Jesús

    2017-01-01

    Neurons at primary visual cortex (V1) in humans and other species are edge filters organized in orientation maps. In these maps, neurons with similar orientation preference are clustered together in iso-orientation domains. These maps have two fundamental properties: (1) retinotopy, i.e. correspondence between displacements at the image space and displacements at the cortical surface, and (2) a trade-off between good coverage of the visual field with all orientations and continuity of iso-orientation domains in the cortical space. There is an active debate on the origin of these locally continuous maps. While most of the existing descriptions take purely geometric/mechanistic approaches which disregard the network function, a clear exception to this trend in the literature is the original approach of Hyvärinen and Hoyer based on infomax and Topographic Independent Component Analysis (TICA). Although TICA successfully addresses a number of other properties of V1 simple and complex cells, in this work we question the validity of the orientation maps obtained from TICA. We argue that the maps predicted by TICA can be analyzed in the retinal space, and when doing so, it is apparent that they lack the required continuity and retinotopy. Here we show that in the orientation maps reported in the TICA literature it is easy to find examples of violation of the continuity between similarly tuned mechanisms in the retinal space, which suggest a random scrambling incompatible with the maps in primates. The new experiments in the retinal space presented here confirm this guess: TICA basis vectors actually follow a random salt-and-pepper organization back in the image space. Therefore, the interesting clusters found in the TICA topology cannot be interpreted as the actual cortical orientation maps found in cats, primates or humans. In conclusion, Topographic ICA does not reproduce cortical orientation maps.

  13. MAP: an iterative experimental design methodology for the optimization of catalytic search space structure modeling.

    PubMed

    Baumes, Laurent A

    2006-01-01

    One of the main problems in high-throughput research for materials is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should create opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies; mostly, traditional designs, homogeneous coverings, or simple random samplings are used. Typical catalytic output distributions are unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm for characterizing the structure of the search space is suggested, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be modeled well. Benchmarking of new algorithms is compulsory, given the lack of prior proof of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of the sampling on subsequent machine-learning performance is also quantified. The minimum sample size required for the algorithm to be statistically discriminated from simple random sampling is investigated.

  14. Research on Formation of Microsatellite Communication with Genetic Algorithm

    PubMed Central

    Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei

    2013-01-01

    For a formation of three microsatellites that fly in the same orbit and perform three-dimensional solid mapping of the terrain, this paper proposes an optimal design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct spread spectrum communication system. The link equation for LEO formation-flying intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is confirmed through simulation. The space circular formation order design method based on the improved genetic algorithm is given, and it can keep the formation order steady for a long time under various perturbing forces. The intersatellite direct spread spectrum communication system is also provided. It is found that, at a distance of 1 km and a data rate of 1 Mbps, the input waveform matches the output waveform well, and LDPC coding improves the communication performance: the error-correction capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system satisfies the communication requirements of the microsatellites, so the presented method provides a significant theoretical foundation for formation flying and intersatellite communication. PMID:24078796
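
    The link calculation referred to above is, at its core, a power budget. The sketch below is a generic free-space (Friis-style) budget in decibels; the frequency, antenna gains, and transmit power are illustrative assumptions, not the paper's figures:

```python
import math

def friis_received_power_dbw(p_tx_dbw, g_tx_dbi, g_rx_dbi, dist_m, freq_hz):
    """Received power = transmit power + antenna gains - free-space path loss."""
    c = 299_792_458.0
    fspl_db = 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)
    return p_tx_dbw + g_tx_dbi + g_rx_dbi - fspl_db

# e.g. a 1 km link at 2.4 GHz, 0 dBi antennas, 0 dBW (1 W) transmit power
print(friis_received_power_dbw(0.0, 0.0, 0.0, 1_000.0, 2.4e9))
```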

  16. A synthetic biology approach to the development of transcriptional regulatory models and custom enhancer design☆,☆☆

    PubMed Central

    Martinez, Carlos A.; Barr, Kenneth; Kim, Ah-Ram; Reinitz, John

    2013-01-01

    Synthetic biology offers novel opportunities for elucidating transcriptional regulatory mechanisms and enhancer logic. Complex cis-regulatory sequences—like the ones driving expression of the Drosophila even-skipped gene—have proven difficult to design from existing knowledge, presumably due to the large number of protein-protein interactions needed to drive the correct expression patterns of genes in multicellular organisms. This work discusses two novel computational methods for the custom design of enhancers that employ a sophisticated, empirically validated transcriptional model, optimization algorithms, and synthetic biology. These synthetic elements have both utilitarian and academic value, from improving existing regulatory models to addressing evolutionary questions. The first method uses simulated annealing to explore the sequence space for synthetic enhancers whose expression output fits a given search criterion. The second method uses a novel optimization algorithm to find functionally accessible pathways between two enhancer sequences. These paths describe a set of mutations along which the predicted expression pattern does not significantly vary at any point. Both methods rely on a predictive mathematical framework that maps the enhancer sequence space to functional output.
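
    The first method is, at heart, simulated annealing over sequence space. A minimal sketch follows, assuming a hypothetical predict() function standing in for the paper's empirically validated transcriptional model and a scalar fit criterion:

```python
import math, random

def anneal(seq, predict, target, iters=10_000, t0=1.0, alpha=0.999):
    """Point-mutate a DNA sequence, accepting worse candidates with a
    temperature-dependent probability, to match a target expression score."""
    bases = "ACGT"
    cur = best = list(seq)
    loss = lambda s: abs(predict("".join(s)) - target)
    cur_loss = best_loss = loss(cur)
    t = t0
    for _ in range(iters):
        cand = cur[:]
        i = random.randrange(len(cand))
        cand[i] = random.choice(bases.replace(cand[i], ""))  # mutate one base
        cand_loss = loss(cand)
        if cand_loss < cur_loss or random.random() < math.exp((cur_loss - cand_loss) / t):
            cur, cur_loss = cand, cand_loss
            if cur_loss < best_loss:
                best, best_loss = cur[:], cur_loss
        t *= alpha                                           # cool the schedule
    return "".join(best), best_loss
```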

  17. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    PubMed

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how they are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators (a bilinear mapping) as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  18. Algebra and topology for applications to physics

    NASA Technical Reports Server (NTRS)

    Rozhkov, S. S.

    1987-01-01

    The principal concepts of algebra and topology are examined with emphasis on applications to physics. In particular, attention is given to sets and mappings; topological spaces and continuous mappings; manifolds; and topological groups and Lie groups. The discussion also covers the tangent spaces of differentiable manifolds, including Lie algebras, vector fields, and differential forms; properties of differential forms; mappings of tangent spaces; and integration of differential forms.

  19. Hyers-Ulam stability of a generalized Apollonius type quadratic mapping

    NASA Astrophysics Data System (ADS)

    Park, Chun-Gil; Rassias, Themistocles M.

    2006-10-01

    Let X, Y be linear spaces. It is shown that if a mapping Q : X → Y satisfies the generalized Apollonius type quadratic functional equation Q(z - x) + Q(z - y) = (1/2)Q(x - y) + 2Q(z - (x + y)/2), modeled on the Apollonius identity ||z - x||^2 + ||z - y||^2 = (1/2)||x - y||^2 + 2||z - (x + y)/2||^2, then the mapping Q is quadratic. We moreover prove the Hyers-Ulam stability of this functional equation in Banach spaces.

  20. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created by a method such as deconvolution or numerical optimization, with a surface error map and an influence function as input. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations; the map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two steps in a single optimization through a constrained nonlinear optimization model that takes both the two-norm of the surface residual error and the dwell-time gradient as the objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing time. Simulations are presented to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the velocity requirements and the acceleration/deceleration limitations. The model and algorithm can also be applied to other computer-controlled subaperture polishing methods.
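
    A hedged sketch of the combined formulation: minimize the two-norm of the residual plus a penalty on the dwell-time gradient, subject to nonnegative dwell times. The discretized influence matrix A, the penalty weight, and the solver choice are illustrative assumptions, not the paper's exact model:

```python
import numpy as np
from scipy.optimize import minimize

def solve_dwell(A, e, lam=0.1):
    """Minimize ||A t - e||^2 + lam * ||grad t||^2 subject to t >= 0, coupling
    residual form error and dwell-time gradient in one objective."""
    m, n = A.shape
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # forward-difference operator
    def objective(t):
        r = A @ t - e
        return r @ r + lam * np.sum((D @ t) ** 2)
    res = minimize(objective, np.zeros(n), bounds=[(0.0, None)] * n,
                   method="L-BFGS-B")
    return res.x
```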

  1. BESIII Physics Data Storing and Processing on HBase and MapReduce

    NASA Astrophysics Data System (ADS)

    LEI, Xiaofeng; Li, Qiang; Kan, Bowen; Sun, Gongxing; Sun, Zhenyu

    2015-12-01

    In the past years, we have successfully applied Hadoop to high-energy physics analysis. Although it has improved the efficiency of data analysis and reduced the cost of cluster building, there is still room for optimization, such as inflexible pre-selection, inefficient random data reading, and the I/O bottleneck caused by FUSE, which is used to access HDFS. In order to change this situation, this paper presents a new platform for storing and analyzing high-energy physics data. The data structure is changed from DST tree-like files to HBase according to the features of the data and the analysis processes, since HBase is better suited to random data reading than DST files and allows HDFS to be accessed directly. Several optimization measures are taken to obtain good performance. A customized protocol is defined for data serialization and deserialization to decrease the storage space in HBase. To make full use of data locality in HBase, a new MapReduce model and a new split policy for HBase regions are proposed. In addition, a dynamic, pluggable, easy-to-use TAG (event metadata) based pre-selection subsystem is established; it can help physicists filter out up to 99.9% of uninteresting data if the conditions are set properly. This means that substantial I/O resources can be saved, CPU usage can be improved, and the time consumed by data analysis can be reduced. Finally, several use cases are designed; the test results show that the new platform has excellent performance, being 3.4 times faster with pre-selection and 20% faster without pre-selection, and that it is stable and scalable as well.

  2. Peripersonal space representation develops independently from visual experience.

    PubMed

    Ricciardi, Emiliano; Menicagli, Dario; Leo, Andrea; Costantini, Marcello; Pietrini, Pietro; Sinigaglia, Corrado

    2017-12-15

    Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g. shape, orientation, etc.) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments in sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, which typically refers to a decrease of reaction times when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with that afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects' reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping occurred also when objects were presented outside subjects' reach, but within the peripersonal space of another agent. This suggests that (the lack of) visual experience does not significantly affect the development of both one's own and others' peripersonal space representation.

  3. Intelligent Space Tube Optimization for speeding ground water remedial design.

    PubMed

    Kalwij, Ineke M; Peralta, Richard C

    2008-01-01

    An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.

  4. Rhesus monkeys (Macaca mulatta) map number onto space.

    PubMed

    Drucker, Caroline B; Brannon, Elizabeth M

    2014-07-01

    Humans map number onto space. However, the origins of this association, and particularly the degree to which it depends upon cultural experience, are not fully understood. Here we provide the first demonstration of a number-space mapping in a non-human primate. We trained four adult male rhesus macaques (Macaca mulatta) to select the fourth position from the bottom of a five-element vertical array. Monkeys maintained a preference to choose the fourth position through changes in the appearance, location, and spacing of the vertical array. We next asked whether monkeys show a spatially-oriented number mapping by testing their responses to the same five-element stimulus array rotated ninety degrees into a horizontal line. In these horizontal probe trials, monkeys preferentially selected the fourth position from the left, but not the fourth position from the right. Our results indicate that rhesus macaques map number onto space, suggesting that the association between number and space in human cognition is not purely a result of cultural experience and instead has deep evolutionary roots. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Experiences with Acquiring Highly Redundant Spatial Data to Support Driverless Vehicle Technologies

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2018-05-01

    As vehicle technology moves towards higher autonomy, the demand for highly accurate geospatial data is rapidly increasing, as accurate maps have huge potential to increase safety. In particular, high-definition 3D maps, including road topography and infrastructure, as well as city models along transportation corridors, represent the necessary support for driverless vehicles. In this effort, a vehicle equipped with high-, medium- and low-resolution active and passive cameras acquired data in a typical traffic environment, represented here by the OSU campus, where GPS/GNSS data are available along with other navigation sensor data streams. The data streams can be used for two purposes. First, high-definition 3D maps can be created by integrating all the sensory data, and Data Analytics/Big Data methods can be tested for automatic object space reconstruction. Second, the data streams can support algorithmic research for driverless vehicle technologies, including object avoidance, navigation/positioning, and detecting pedestrians and bicyclists. Crucial cross-performance analyses of map database resolution and accuracy against sensor performance metrics can be derived to achieve economical solutions for accurate driverless vehicle positioning. These, in turn, could provide essential information for optimizing the choice of geospatial map databases and sensor quality to support driverless vehicle technologies. The paper reviews the data acquisition and primary data processing challenges and performance results.

  6. Distortion correction in EPI at ultra-high-field MRI using PSF mapping with optimal combination of shift detection dimension.

    PubMed

    Oh, Se-Hong; Chung, Jun-Young; In, Myung-Ho; Zaitsev, Maxim; Kim, Young-Bo; Speck, Oliver; Cho, Zang-Hee

    2012-10-01

    Despite its wide use, echo-planar imaging (EPI) suffers from geometric distortions due to off-resonance effects, i.e., strong magnetic field inhomogeneity and susceptibility. This article reports a novel method for correcting the distortions observed in EPI acquired at ultra-high field, such as 7 T. Point spread function (PSF) mapping methods have been proposed for correcting the distortions in EPI. The PSF shift map can be derived along either the nondistorted or the distorted coordinates. Along the nondistorted coordinates, more information about compressed areas is present, but the map is prone to PSF-ghosting artifacts induced by large k-space shifts in the PSF encoding direction. In contrast, shift maps along the distorted coordinates contain more information in stretched areas and are more robust against PSF ghosting. In ultra-high-field MRI, an EPI image contains both compressed and stretched regions, depending on the B0 field inhomogeneity and local susceptibility. In this study, we present a new geometric distortion correction scheme that selectively applies the shift map with the higher information content. We propose a PSF-ghost elimination method to generate an artifact-free pixel shift map along the nondistorted coordinates. The proposed method corrects the effects of local magnetic field inhomogeneity induced by susceptibility, along with PSF-ghost artifact cancellation. We have experimentally demonstrated the advantages of the proposed method with EPI acquisitions of a phantom and the human brain using 7-T MRI. Copyright © 2011 Wiley Periodicals, Inc.

  7. Deep neural mapping support vector machines.

    PubMed

    Li, Yujian; Zhang, Ting

    2017-09-01

    The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect can be reduced by NEUROSVM, an architecture that uses a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear-kernel NEUROSVM can be theoretically simplified to an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained, together with a virtual ordinary output layer, by backpropagation; the output of its last hidden layer is then taken as the input of the SVM classifier for further, separate training. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called the deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method, where the kernel mapping is an explicit function expressed as a sub-network, in contrast to the implicit function induced by a traditional kernel function. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM to jointly train an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Taking the sub-network and the SVM classifier as a whole, the joint training of DNMSVM uses gradient descent to optimize the objective function, with the sub-network pre-trained layer-wise via contrastive divergence learning of restricted Boltzmann machines. Compared to the separate training of NEUROSVM, this joint training gives DNMSVM clear advantages. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with a radial basis function kernel), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.
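
    A minimal sketch of the two-stage NEUROSVM-style pipeline that the model builds on (an MLP trained as the explicit kernel mapping, its last hidden layer feeding a linear SVM); the joint CD pre-training plus gradient-descent training of DNMSVM itself is not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Stage 1: train the sub-network (MLP) by backpropagation.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X, y)

def hidden_features(mlp, X):
    """Forward pass through the hidden layers only (ReLU, sklearn's default):
    the explicit kernel mapping from input space to feature space."""
    a = X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        a = np.maximum(a @ W + b, 0.0)
    return a

# Stage 2: train a linear SVM on the mapped features.
svm = SVC(kernel="linear").fit(hidden_features(mlp, X), y)
print(svm.score(hidden_features(mlp, X), y))
```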

  8. From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization

    PubMed Central

    2015-01-01

    We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We propose a new interactive evolutionary computation (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework; a paired-comparison-based mechanism underlies the CE search scheme in nature. A simulation experiment with a pseudo-IEC user is conducted to evaluate the proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed. PMID:25879067

  9. From determinism and probability to chaos: chaotic evolution towards philosophy and methodology of chaotic optimization.

    PubMed

    Pei, Yan

    2015-01-01

    We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We propose a new interactive evolutionary computation (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework; a paired-comparison-based mechanism underlies the CE search scheme in nature. A simulation experiment with a pseudo-IEC user is conducted to evaluate the proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.
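
    For reference, the four chaotic systems named above in their common textbook forms; the parameter values are standard choices and are assumptions rather than the paper's settings:

```python
import numpy as np

def logistic(x, mu=4.0):
    return mu * x * (1.0 - x)            # chaotic on (0, 1) for mu = 4

def tent(x, a=2.0):
    return a * min(x, 1.0 - x)           # piecewise-linear analogue

def gaussian(x, b=4.9, c=-0.58):
    return np.exp(-b * x * x) + c        # bell-shaped "Gaussian (mouse)" map

def henon(state, a=1.4, b=0.3):
    x, y = state
    return 1.0 - a * x * x + y, b * x    # two-dimensional map
```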

  10. Optimal health and disease management using spatial uncertainty: a geographic characterization of emergent artemisinin-resistant Plasmodium falciparum distributions in Southeast Asia.

    PubMed

    Grist, Eric P M; Flegg, Jennifer A; Humphreys, Georgina; Mas, Ignacio Suay; Anderson, Tim J C; Ashley, Elizabeth A; Day, Nicholas P J; Dhorda, Mehul; Dondorp, Arjen M; Faiz, M Abul; Gething, Peter W; Hien, Tran T; Hlaing, Tin M; Imwong, Mallika; Kindermans, Jean-Marie; Maude, Richard J; Mayxay, Mayfong; McDew-White, Marina; Menard, Didier; Nair, Shalini; Nosten, Francois; Newton, Paul N; Price, Ric N; Pukrittayakamee, Sasithon; Takala-Harrison, Shannon; Smithuis, Frank; Nguyen, Nhien T; Tun, Kyaw M; White, Nicholas J; Witkowski, Benoit; Woodrow, Charles J; Fairhurst, Rick M; Sibley, Carol Hopkins; Guerin, Philippe J

    2016-10-24

    Artemisinin-resistant Plasmodium falciparum malaria parasites are now present across much of mainland Southeast Asia, where ongoing surveys are measuring and mapping their spatial distribution. These efforts require substantial resources. Here we propose a generic 'smart surveillance' methodology to identify optimal candidate sites for future sampling and thus map the distribution of artemisinin resistance most efficiently. The approach uses the 'uncertainty' map generated iteratively by a geostatistical model to determine optimal locations for subsequent sampling. The methodology is illustrated using recent data on the prevalence of the K13-propeller polymorphism (a genetic marker of artemisinin resistance) in the Greater Mekong Subregion. This methodology, which has broader application to geostatistical mapping in general, could improve the quality and efficiency of drug resistance mapping and thereby guide practical operations to eliminate malaria in affected areas.
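
    The core loop is easy to sketch: refit a spatial model, then sample where its predictive uncertainty is largest. The Gaussian-process stand-in below is an assumption for illustration; the study's actual geostatistical model of K13-propeller prevalence is richer:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def next_survey_site(sampled_X, sampled_y, candidate_X):
    """Fit a surrogate geostatistical model and return the candidate
    location with the largest predictive standard deviation."""
    gp = GaussianProcessRegressor().fit(sampled_X, sampled_y)
    _, std = gp.predict(candidate_X, return_std=True)
    return candidate_X[np.argmax(std)]   # most uncertain location
```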

  11. Performance Measures for Adaptive Decisioning Systems

    DTIC Science & Technology

    1991-09-11

    set to hypothesis space mapping best approximates the known map. Two assumptions, a sufficiently representative training set and the ability of the...successful prediction of LINEXT performance. The LINEXT algorithm above performs the decision space mapping on the training-set elements exactly. For a

  12. Stability of iterative procedures with errors for approximating common fixed points of a couple of q-contractive-like mappings in Banach spaces

    NASA Astrophysics Data System (ADS)

    Zeng, Lu-Chuan; Yao, Jen-Chih

    2006-09-01

    Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.

  13. Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Knudson, Matthew D.; Colby, Mitchell; Tumer, Kagan

    2014-01-01

    Dynamic flight environments, in which objectives and environmental features change with respect to time, pose a difficult problem for planning optimal flight paths. Path planning methods are typically computationally expensive and are often difficult to implement in real time if system objectives change. This computational problem is compounded when multiple agents are present in the system, as the state and action space grows exponentially. In this work, we use cooperative coevolutionary algorithms to develop policies that control agent motion in a dynamic multiagent unmanned aerial system environment, where goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy that maps the new environmental features to a trajectory for the agent, ensuring safe and reliable operation while providing 92% of the theoretically optimal performance.

  14. Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Colby, Mitchell; Knudson, Matthew D.; Tumer, Kagan

    2014-01-01

    Dynamic environments, in which objectives and environmental features change with respect to time, pose a difficult problem for planning optimal paths through these environments. Path planning methods are typically computationally expensive and are often difficult to implement in real time if system objectives change. This computational problem is compounded when multiple agents are present, as the state and action space grows exponentially with the number of agents in the system. In this work, we use cooperative coevolutionary algorithms to develop policies that control agent motion in a dynamic multiagent unmanned aerial system environment, where goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy that maps the new environmental features to a trajectory for the agent, ensuring safe and reliable operation while providing 92% of the theoretically optimal performance.

  15. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure considers a set of IFSs able to generate fractal structures in a two-dimensional phase space and uses them to modify a current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy, and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) placement and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
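
    A rough sketch of an IFS-based mutation, assuming (purely for illustration) a Sierpinski-triangle chaos game as the fractal generator and centered iterates as perturbations; the paper's specific IFSs and the way offsets are applied to individuals are not reproduced:

```python
import numpy as np

def ifs_points(n, rng):
    """Chaos-game iterates of a Sierpinski-triangle IFS, centered so the
    cloud of visited 2D points can serve as zero-mean mutation offsets."""
    offsets = np.array([[0.0, 0.0], [0.5, 0.0], [0.25, 0.5]])
    p = np.zeros(2)
    out = np.empty((n, 2))
    for i in range(n):
        p = 0.5 * p + offsets[rng.integers(3)]  # pick one affine map at random
        out[i] = p
    return out - out.mean(axis=0)

def ifs_mutate(x, scale=0.1, rng=None):
    rng = rng or np.random.default_rng()
    pts = ifs_points((len(x) + 1) // 2, rng).ravel()[: len(x)]
    return x + scale * pts                       # fractal-structured perturbation
```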

  16. Efficient characterization of phase space mapping in axially symmetric optical systems

    NASA Astrophysics Data System (ADS)

    Barbero, Sergio; Portilla, Javier

    2018-01-01

    Phase space mapping, typically between an object and image plane, characterizes an optical system within a geometrical optics framework. We propose a novel conceptual framework to characterize the phase mapping in axially symmetric optical systems for arbitrary object locations, not restricted to a specific object plane. The idea is based on decomposing the phase mapping into a set of bivariate equations corresponding to different values of the radial coordinate on a specific object surface (most likely the entrance pupil). These equations are then approximated through bivariate Chebyshev interpolation at Chebyshev nodes, which guarantees uniform convergence. Additionally, we propose the use of a new concept (effective object phase space), defined as the set of points of the phase space at the first optical element (typically the entrance pupil) that are effectively mapped onto the image surface. The effective object phase space provides, by means of an inclusion test, a way to avoid tracing rays that do not reach the image surface.
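
    The interpolation step can be sketched with numpy's Chebyshev utilities; the stand-in mapping f, the degrees, and the tensor-grid construction are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_nodes(n):
    return np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # Chebyshev nodes

def fit_bivariate(f, nx=16, ny=16):
    """Interpolate f(x, y) on [-1, 1]^2 at a tensor grid of Chebyshev nodes;
    sampling at these nodes is what yields uniform convergence."""
    x, y = cheb_nodes(nx), cheb_nodes(ny)
    F = f(x[:, None], y[None, :])                 # samples, shape (nx, ny)
    Tx, Ty = C.chebvander(x, nx - 1), C.chebvander(y, ny - 1)
    cf = np.linalg.solve(Ty, np.linalg.solve(Tx, F).T).T
    return cf                                     # evaluate via C.chebval2d(x, y, cf)
```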

  17. Summary of space imagery studies in Utah and Nevada. [using LANDSAT 1, EREP, and Skylab imagery

    NASA Technical Reports Server (NTRS)

    Jensen, M. L.; Laylander, P.

    1975-01-01

    LANDSAT-1, Skylab, and RB-57 imagery of the San Rafael swell, acquired within days of each other, enabled geological mapping of individual formations of the southern portion of this broad anticlinal feature in eastern Utah. Mapping at a scale of 1/250,000 on an enhanced and enlarged S-190B image resulted in a geological map showing mappable features correlative with those indicated on the geological map of Utah at the same scale. An enhanced enlargement of an S-190B color image of the Bingham Porphyry Copper deposit at a scale of 1/19,200 allowed comparison of a geological map of the area with the space imagery map; the agreement was fair for the intrusion boundaries but of no quality for mapping the sediments. Hydrothermal alteration is only slightly evident on space imagery at Bingham, but in the Tintic mining district and the volcanic piles of the Keg and Thomas ranges, Utah, hydrothermal alteration is readily mapped on color enlargements of S-190B (SL-3, T3-3N Tr-2). A mercury soil-gas analyzer was developed for locating hidden mineralized zones suggested by the space imagery.

  18. How to find and type red/brown dwarf stars in near-infrared imaging space observatories

    NASA Astrophysics Data System (ADS)

    Holwerda, Benne Willem; Ryan, Russell; Bridge, Joanna; Pirzkal, Nor; Kenworthy, Matthew; Andersen, Morten; Wilkins, Stephen; Trenti, Michele; Meshkat, Tiffany; Bernard, Stephanie; Smit, Renske

    2018-01-01

    Here we evaluate the near-infrared colors of brown dwarfs as observed with four major infrared imaging space observatories: the Hubble Space Telescope (HST), the James Webb Space Telescope (JWST), the EUCLID mission, and the WFIRST telescope. We use the SPLAT SpeX spectroscopic library to map out the colors of the M, L, and T-type brown dwarfs. We identify which color-color combination is optimal for identifying broad type and which single color is optimal to then identify the subtype (e.g., T0-9). We evaluate each observatory separately as well as the narrow-field (HST and JWST) and wide-field (EUCLID and WFIRST) combinations. HST filters used thus far for high-redshift searches (e.g., CANDELS and BoRG) are close to optimal within the available filter combinations. A clear improvement over HST is one of two broad/medium filter combinations on JWST: pairing F140M with either F150W or F162M discriminates well between brown dwarf subtypes. The improvement of the JWST filter set over the HST one is so marked that no combination of HST and JWST filters improves the classification. The EUCLID filter set alone performs poorly at typing brown dwarfs, and WFIRST performs only marginally better despite a wider selection of filters. A combined EUCLID and WFIRST observation, using WFIRST's W146 and F062 and EUCLID's Y-band, allows much better discrimination between broad brown dwarf categories. In this respect, WFIRST acts as a targeted follow-up observatory for the all-sky EUCLID survey. However, subsequent subtyping with the combination of EUCLID and WFIRST observations remains uncertain due to the lack of medium- or narrow-band filters in this wavelength range. We argue that a medium band added to the WFIRST filter selection would greatly improve its ability to preselect against brown dwarfs in high-latitude surveys.

  19. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce

    PubMed Central

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2016-01-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support for high-performance spatial queries on large volumes of data has become increasingly important in numerous fields, requiring a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large-scale spatial applications. In this demonstration, we present Hadoop-GIS – a scalable and high-performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries and submitted through a command line/web interface for execution. In parallel to the system demonstration, we explain the system architecture and the details of how queries are translated into MapReduce operators, optimized, and executed on Hadoop. In addition, we showcase how the system can support two representative real-world use cases: large-scale pathology analytical imaging, and geospatial data warehousing. PMID:27617325

  20. Distinguishability notion based on Wootters statistical distance: Application to discrete maps

    NASA Astrophysics Data System (ADS)

    Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.

    2017-08-01

    We study the distinguishability notion given by Wootters for states represented by probability density functions. It has the particularity that it can also be used to define a statistical distance in chaotic one-dimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄ we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. We also give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions of the phase space. We illustrate the results for the logistic and circle maps, numerically and analytically, and obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the associated metric space to arbitrary probability distributions (not necessarily invariant densities) is given, along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.

  1. Application of Contraction Mappings to the Control of Nonlinear Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Killingsworth, W. R., Jr.

    1972-01-01

    The theoretical and applied aspects of successive approximation techniques are considered for the determination of controls for nonlinear dynamical systems. Particular emphasis is placed upon the methods of contraction mappings and modified contraction mappings. It is shown that application of the Pontryagin principle to the optimal nonlinear regulator problem results in necessary conditions for optimality in the form of a two point boundary value problem (TPBVP). The TPBVP is represented by an operator equation and functional analytic results on the iterative solution of operator equations are applied. The general convergence theorems are translated and applied to those operators arising from the optimal regulation of nonlinear systems. It is shown that simply structured matrices and similarity transformations may be used to facilitate the calculation of the matrix Green functions and the evaluation of the convergence criteria. A controllability theory based on the integral representation of TPBVP's, the implicit function theorem, and contraction mappings is developed for nonlinear dynamical systems. Contraction mappings are theoretically and practically applied to a nonlinear control problem with bounded input control and the Lipschitz norm is used to prove convergence for the nondifferentiable operator. A dynamic model representing community drug usage is developed and the contraction mappings method is used to study the optimal regulation of the nonlinear system.
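
    The successive-approximation scheme underlying the method is the classical Banach fixed-point iteration; the toy contraction below is illustrative and is not the thesis's two point boundary value operator:

```python
import numpy as np

def picard(T, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = T(x_k) until successive approximations stop moving;
    convergence is guaranteed when T is a contraction mapping."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter")

# e.g. T(x) = 0.5*cos(x) is a contraction on R (|T'| <= 0.5)
print(picard(lambda x: 0.5 * np.cos(x), np.array([0.0])))
```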

  2. Open space preservation, property value, and optimal spatial configuration

    Treesearch

    Yong Jiang; Stephen K. Swallow

    2007-01-01

    The public has increasingly demonstrated a strong support for open space preservation. How to finance the socially efficient level of open space with the optimal spatial structure is of high policy relevance to local governments. In this study, we developed a spatially explicit open space model to help identify the socially optimal amount and optimal spatial...

  3. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    PubMed

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes, to regressing the activities of whole deep neural networks onto measured brain activity. Copyright © 2017. Published by Elsevier Inc.
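
    A hedged sketch of the forward model described above: one shared Gaussian pooling field ("where") applied to a stack of feature maps, then one weight per feature map ("what"). Array shapes and parameter names are assumptions for illustration:

```python
import numpy as np

def fwrf_predict(feature_maps, cx, cy, sigma, w):
    """feature_maps: (K, H, W) stack; (cx, cy, sigma): shared Gaussian pooling
    field; w: (K,) feature weights. Returns the predicted voxel activity."""
    K, H, W = feature_maps.shape
    ys, xs = np.mgrid[0:H, 0:W]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    pooled = (feature_maps * g).reshape(K, -1).sum(axis=1)  # "where" pooling
    return w @ pooled                                       # "what" weighting
```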

  4. Mapping continental-scale biomass burning and smoke palls over the Amazon basin as observed from the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Helfert, Michael R.; Lulla, Kamlesh P.

    1990-01-01

    Space Shuttle and Skylab-3 photography has been used to map the areal extent of Amazonian smoke palls associated with biomass burning (1973-1988). Areas covered with smoke have increased from approximately 300,000 sq km in 1973 to continental-size smoke palls measuring approximately 3,000,000 sq km in 1985 and 1988. Mapping of these smoke palls has been accomplished using space photography mainly acquired during Space Shuttle missions. Astronaut observations of such dynamic and vital environmental phenomena indicate the possibility of integrating the earth observation capabilities of all space platforms in future Global Change research.

  5. TH-A-9A-02: BEST IN PHYSICS (THERAPY) - 4D IMRT Planning Using Highly-Parallelizable Particle Swarm Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modiri, A; Gu, X; Sawant, A

    2014-06-15

    Purpose: We present a particle swarm optimization (PSO)-based 4D IMRT planning technique designed for dynamic MLC tracking delivery to lung tumors. The key idea is to utilize the temporal dimension as an additional degree of freedom rather than a constraint in order to achieve improved sparing of organs at risk (OARs). Methods: The target and normal structures were manually contoured on each of the ten phases of a 4DCT scan acquired from a lung SBRT patient who exhibited 1.5 cm tumor motion despite the use of abdominal compression. Corresponding ten IMRT plans were generated using the Eclipse treatment planning system. These plans served as initial guess solutions for the PSO algorithm. Fluence weights were optimized over the entire solution space, i.e., 10 phases × 12 beams × 166 control points. The size of the solution space motivated our choice of PSO, which is a highly parallelizable stochastic global optimization technique that is well-suited for such large problems. A summed fluence map was created using an in-house B-spline deformable image registration. Each plan was compared with a corresponding, internal target volume (ITV)-based IMRT plan. Results: The PSO 4D IMRT plan yielded comparable PTV coverage and significantly higher dose sparing for parallel and serial OARs compared to the ITV-based plan. The dose sparing achieved via PSO-4DIMRT was: lung Dmean = 28%; lung V20 = 90%; spinal cord Dmax = 23%; esophagus Dmax = 31%; heart Dmax = 51%; heart Dmean = 64%. Conclusion: Truly 4D IMRT that uses the temporal dimension as an additional degree of freedom can achieve significant dose sparing of serial and parallel OARs. Given the large solution space, PSO represents an attractive, parallelizable tool to achieve globally optimal solutions for such problems. This work was supported through funding from the National Institutes of Health and Varian Medical Systems. Amit Sawant has research funding from Varian Medical Systems, VisionRT Ltd. and Elekta.

  6. Improvement to the scanning electron microscope image adaptive Canny optimization colorization by pseudo-mapping.

    PubMed

    Lo, T Y; Sim, K S; Tso, C P; Nia, M E

    2014-01-01

    An improvement to the previously proposed adaptive Canny optimization technique for scanning electron microscope image colorization is reported. The additional feature, called the pseudo-mapping technique, temporarily maps grayscale markings to a set of pre-defined pseudo-colors as a means to instill color information for grayscale colors in the chrominance channels. This allows the presence of grayscale markings to be identified; hence, optimization colorization of grayscale colors is made possible. This additional feature enhances the flexibility of scanning electron microscope image colorization by providing a wider range of possible color enhancement. Furthermore, the nature of this technique also allows users to adjust the luminance intensities of selected regions of the original image to a certain extent. © 2014 Wiley Periodicals, Inc.

  7. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
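
    The binary switching idea can be sketched as a greedy pairwise label swap; the cost below is a simple Hamming-distance/Euclidean-proximity surrogate standing in for the paper's combined Euclidean-distance and mutual-information cost functions:

```python
import numpy as np
from itertools import combinations

def labeling_cost(points, labels):
    """Surrogate BICM cost: penalize label pairs that differ in many bits
    while sitting at small Euclidean distance in the constellation."""
    c = 0.0
    for i, j in combinations(range(len(points)), 2):
        hamming = bin(labels[i] ^ labels[j]).count("1")
        c += hamming * np.exp(-abs(points[i] - points[j]) ** 2)
    return c

def binary_switching(points, labels):
    """Greedily swap pairs of labels while any swap lowers the cost."""
    labels = list(labels)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(labels)), 2):
            before = labeling_cost(points, labels)
            labels[i], labels[j] = labels[j], labels[i]
            if labeling_cost(points, labels) < before:
                improved = True                      # keep the beneficial swap
            else:
                labels[i], labels[j] = labels[j], labels[i]
    return labels
```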

  8. Linear Mapping of Numbers onto Space Requires Attention

    ERIC Educational Resources Information Center

    Anobile, Giovanni; Cicchini, Guido Marco; Burr, David C.

    2012-01-01

    Mapping of number onto space is fundamental to mathematics and measurement. Previous research suggests that while typical adults with mathematical schooling map numbers veridically onto a linear scale, pre-school children and adults without formal mathematics training, as well as individuals with dyscalculia, show strong compressive,…

  9. The design of free structure granular mappings: the use of the principle of justifiable granularity.

    PubMed

    Pedrycz, Witold; Al-Hmouz, Rami; Morfeq, Ali; Balamash, Abdullah

    2013-12-01

    The study introduces a concept of mappings realized in the presence of information granules and offers a design framework supporting the formation of such mappings. Information granules are conceptually meaningful entities formed on the basis of a large number of experimental input–output numeric data available for the construction of the model. We develop a conceptually and algorithmically sound way of forming information granules. Considering the directional nature of the mapping to be formed, this directionality aspect needs to be taken into account when developing information granules. The property of directionality implies that while the information granules in the input space could be constructed with a great deal of flexibility, the information granules formed in the output space have to inherently relate to those built in the input space. The input space is granulated by running a clustering algorithm; for illustrative purposes, the focus here is on fuzzy clustering realized with the aid of the fuzzy C-means algorithm. The information granules in the output space are constructed with the aid of the principle of justifiable granularity (being one of the underlying fundamental conceptual pursuits of Granular Computing). The construct exhibits two important features. First, the constructed information granules are formed in the presence of information granules already constructed in the input space (and this realization is reflective of the direction of the mapping from the input to the output space). Second, the principle of justifiable granularity does not confine the realization of information granules to a single formalism such as fuzzy sets but helps form granules expressed in any required formalism of information granulation. The quality of the granular mapping (viz. the mapping realized for the information granules formed in the input and output spaces) is expressed in terms of the coverage criterion (articulating how well the experimental data are “covered” by information granules produced by the granular mapping for any input experimental data). Some parametric studies are reported by quantifying the performance of the granular mapping (expressed in terms of the coverage and specificity criteria) versus the values of certain parameters utilized in the construction of output information granules through the principle of justifiable granularity. The plots of coverage–specificity dependency help determine a knee point and reach a sound compromise between these two conflicting requirements imposed on the quality of the granular mapping. Furthermore, the quality of the mapping is quantified with regard to the number of information granules (implying a certain granularity of the mapping). A series of experiments is reported as well.
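
    The principle of justifiable granularity mentioned above admits a compact numeric illustration: for an interval granule, grow the bounds around a representative value so as to balance coverage (how much data falls inside) against specificity (how narrow the interval is). The product criterion and the parameter alpha below are one common formulation, used here as an assumption rather than the paper's exact construct:

    ```python
    import numpy as np

    def justifiable_granule(data, alpha=1.0):
        """Principle of justifiable granularity, interval form (a sketch):
        pick bounds around the median that maximize coverage (fraction of
        data covered) times specificity (narrowness of the interval).
        `alpha` trades coverage against specificity; illustrative only."""
        med = np.median(data)
        rng = data.max() - data.min()

        def side(cands, upper):
            best, best_q = med, 0.0
            for b in cands:
                lo, hi = (med, b) if upper else (b, med)
                cov = np.mean((data >= lo) & (data <= hi))   # coverage
                spec = 1.0 - (hi - lo) / rng                 # specificity
                q = (cov ** alpha) * spec
                if q > best_q:
                    best, best_q = b, q
            return best

        b_hi = side(np.sort(data[data > med]), upper=True)
        b_lo = side(np.sort(data[data < med])[::-1], upper=False)
        return b_lo, b_hi

    x = np.random.default_rng(1).normal(5.0, 1.0, 500)
    print(justifiable_granule(x))  # an interval around 5 covering the bulk
    ```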

  10. Engineering Feasibility and Trade Studies for the NASA/VSGC MicroMaps Space Mission

    NASA Technical Reports Server (NTRS)

    Abdelkhalik, Ossama O.; Nairouz, Bassem; Weaver, Timothy; Newman, Brett

    2003-01-01

    Knowledge of airborne CO concentrations is critical for accurate scientific prediction of global scale atmospheric behavior. MicroMaps is an existing NASA owned gas filter radiometer instrument designed for space-based measurement of atmospheric CO vertical profiles. Due to programmatic changes, the instrument does not have access to the space environment and is in storage. MicroMaps hardware has significant potential for filling a critical scientific need, thus motivating concept studies for new and innovative scientific spaceflight missions that would leverage the MicroMaps heritage and investment, and contribute to new CO distribution data. This report describes engineering feasibility and trade studies for the NASA/VSGC MicroMaps Space Mission. Conceptual studies encompass: 1) overall mission analysis and synthesis methodology, 2) major subsystem studies and detailed requirements development for an orbital platform option consisting of a small, single purpose spacecraft, 3) assessment of orbital platform option consisting of the International Space Station, and 4) survey of potential launch opportunities for gaining access to orbit. Investigations are of a preliminary first-order nature. Results and recommendations from these activities are envisioned to support future MicroMaps Mission design decisions regarding program down select options leading to more advanced and mature phases.

  11. Mapping the landscape of metabolic goals of a cell

    DOE PAGES

    Zhao, Qi; Stettner, Arion I.; Reznik, Ed; ...

    2016-05-23

    Here, genome-scale flux balance models of metabolism provide testable predictions of all metabolic rates in an organism, by assuming that the cell is optimizing a metabolic goal known as the objective function. We introduce an efficient inverse flux balance analysis (invFBA) approach, based on linear programming duality, to characterize the space of possible objective functions compatible with measured fluxes. After testing our algorithm on simulated E. coli data and time-dependent S. oneidensis fluxes inferred from gene expression, we apply our inverse approach to flux measurements in long-term evolved E. coli strains, revealing objective functions that provide insight into metabolic adaptation trajectories.

  12. Using semantic data modeling techniques to organize an object-oriented database for extending the mass storage model

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik

    1991-01-01

    A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.

  13. DISCO: Distance and Spectrum Correlation Optimization Alignment for Two Dimensional Gas Chromatography Time-of-Flight Mass Spectrometry-based Metabolomics

    PubMed Central

    Wang, Bing; Fang, Aiqin; Heim, John; Bogdanov, Bogdan; Pugh, Scott; Libardoni, Mark; Zhang, Xiang

    2010-01-01

    A novel peak alignment algorithm using a distance and spectrum correlation optimization (DISCO) method has been developed for two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) based metabolomics. This algorithm uses the output of the instrument control software, ChromaTOF, as its input data. It detects and merges multiple peak entries of the same metabolite into one peak entry in each input peak list. After a z-score transformation of metabolite retention times, DISCO selects landmark peaks from all samples based on both two-dimensional retention times and mass spectrum similarity of fragment ions measured by Pearson’s correlation coefficient. A local linear fitting method is employed in the original two-dimensional retention time space to correct retention time shifts. A progressive retention time map searching method is used to align metabolite peaks in all samples together based on optimization of the Euclidean distance and mass spectrum similarity. The effectiveness of the DISCO algorithm is demonstrated using data sets acquired under different experimental conditions and a spiked-in experiment. PMID:20476746
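
    The matching step described above can be sketched numerically: z-score the two retention-time dimensions so they are comparable, then score candidate peak pairs by combining retention-time proximity with Pearson correlation of the fragment spectra. The array shapes, the weighting, and the mutual-best-match rule for landmarks are assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy peak lists: columns are (RT1, RT2); rows are peaks in two samples.
    rt_ref = rng.random((20, 2)) * np.array([3600.0, 6.0])   # seconds
    rt_tgt = rt_ref + rng.normal(0, [20.0, 0.05], rt_ref.shape)
    spectra_ref = rng.random((20, 100))
    spectra_tgt = spectra_ref + 0.05 * rng.random((20, 100))

    # z-score each retention-time dimension so RT1 and RT2 are comparable.
    mu, sd = rt_ref.mean(0), rt_ref.std(0)
    z_ref, z_tgt = (rt_ref - mu) / sd, (rt_tgt - mu) / sd

    def match_score(i, j, w=0.5):
        """DISCO-style score: Euclidean distance in z-scored RT space
        combined with Pearson correlation of fragment spectra (the
        weighting is illustrative)."""
        d = np.linalg.norm(z_ref[i] - z_tgt[j])
        r = np.corrcoef(spectra_ref[i], spectra_tgt[j])[0, 1]
        return w * np.exp(-d) + (1 - w) * r

    # Landmark pairs: mutually best-scoring peaks across the two samples.
    scores = np.array([[match_score(i, j) for j in range(20)]
                       for i in range(20)])
    landmarks = [(i, int(scores[i].argmax())) for i in range(20)
                 if scores[:, scores[i].argmax()].argmax() == i]
    ```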

  14. Gravitons as Embroidery on the Weave

    NASA Astrophysics Data System (ADS)

    Iwasaki, Junichi; Rovelli, Carlo

    We investigate the physical interpretation of the loop states that appear in the loop representation of quantum gravity. By utilizing the “weave” state, which has been recently introduced as a quantum description of the microstructure of flat space, we analyze the relation between loop states and graviton states. This relation determines a linear map M from the state-space of the nonperturbative theory (loop space) into the state-space of the linearized theory (Fock space). We present an explicit form of this map, and a preliminary investigation of its properties. The existence of such a map indicates that the full nonperturbative quantum theory includes a sector that describes the same physics as (the low energy regimes of) the linearized theory, namely gravitons on flat space.

  15. Accelerated Brain DCE-MRI Using Iterative Reconstruction With Total Generalized Variation Penalty for Quantitative Pharmacokinetic Analysis: A Feasibility Study.

    PubMed

    Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng

    2017-08-01

    To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with total generalized variation penalty in the quantitative pharmacokinetic analysis for clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4. They are (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant Ktrans in the extended Tofts model and blood flow FB and blood volume VB from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding calculated values from the fully sampled data. To quantify each parameter's accuracy calculated using the undersampled data, error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to the ones generated from the original full sampling data. Within the region of interest, most derived error in volume mean values were about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error value of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. All investigated pharmacokinetic parameters showed no significant differences between the results from the original data and the reduced sampling data. With sparsely sampled k-space data simulating an acquisition acceleration factor of 4, the investigated total generalized variation-based iterative image reconstruction method can accurately estimate the studied dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters, supporting reliable clinical application.
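
    The golden ratio-optimized radial profile mentioned in strategy (1) is straightforward to generate: successive spokes are rotated by the golden angle so that any contiguous subset of spokes covers angles near-uniformly. The grid size and mask construction below are illustrative:

    ```python
    import numpy as np

    def golden_ratio_radial_angles(n_spokes):
        """Golden-ratio radial profile: each spoke is rotated by the golden
        angle (~111.246 deg), a standard construction giving near-uniform
        angular coverage for any number of spokes."""
        golden_angle = np.pi / ((1 + np.sqrt(5)) / 2)   # radians, ~1.9416
        return np.mod(np.arange(n_spokes) * golden_angle, np.pi)

    def radial_mask(n, angles, samples_per_spoke=None):
        """Binary k-space mask on an n x n Cartesian grid from radial spokes."""
        m = np.zeros((n, n), dtype=bool)
        t = np.linspace(-n / 2, n / 2 - 1, samples_per_spoke or n)
        for a in angles:
            kx = np.clip(np.round(t * np.cos(a)) + n // 2, 0, n - 1).astype(int)
            ky = np.clip(np.round(t * np.sin(a)) + n // 2, 0, n - 1).astype(int)
            m[ky, kx] = True
        return m

    mask = radial_mask(128, golden_ratio_radial_angles(32))  # 32-ray profile
    print(mask.mean())  # fraction of k-space sampled, roughly 1/4 here
    ```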

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suchanecki, Z.; Antoniou, I.; Tasaki, S.

    We consider the problem of rigging for the Koopman operators of the Renyi and the baker maps. We show that the rigged Hilbert space for the Renyi maps has some of the properties of a strict inductive limit and give a detailed description of the rigged Hilbert space for the baker maps. © 1996 American Institute of Physics.

  17. Ocean Thermal Feature Recognition, Discrimination and Tracking Using Infrared Satellite Imagery

    DTIC Science & Technology

    1991-06-01

    Only fragments of this report are available; the recoverable figure caption reads: "Figure 2.6: Ideal feature space mapping from pattern tile - search tile comparison."

  18. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    PubMed

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.

  19. Sensitive and specific detection of viable Mycobacterium avium subsp. paratuberculosis in raw milk by the peptide-mediated magnetic separation-phage assay.

    PubMed

    Foddai, A C G; Grant, I R

    2017-05-01

    To validate an optimized peptide-mediated magnetic separation (PMS)-phage assay for detection of viable Mycobacterium avium subsp. paratuberculosis (MAP) in milk. Inclusivity, specificity and limit of detection 50% (LOD50) of the optimized PMS-phage assay were assessed. Plaques were obtained for all 43 MAP strains tested. Of 12 other Mycobacterium sp. tested, only Mycobacterium bovis BCG produced small numbers of plaques. The LOD50 of the PMS-phage assay was 0.93 MAP cells per 50 ml milk, which was better than both PMS-qPCR and PMS-culture. When individual milks (n = 146) and bulk tank milk (BTM, n = 22) obtained from Johne's affected herds were tested by the PMS-phage assay, viable MAP were detected in 31 (21.2%) of 146 individual milks and 13 (59.1%) of 22 BTM, with MAP numbers detected ranging from 6-948 plaque-forming units per 50 ml milk. PMS-qPCR and PMS-MGIT culture proved to be less sensitive tests than the PMS-phage assay. The optimized PMS-phage assay is the most sensitive and specific method available for the detection of viable MAP in milk. Further work is needed to streamline the PMS-phage assay, because the assay's multistep format currently makes it unsuitable for adoption by the dairy industry as a screening test. The inclusivity (ability to detect all MAP strains), specificity (ability to detect only MAP) and detection sensitivity (ability to detect low numbers of MAP) of the optimized PMS-phage assay have been comprehensively demonstrated for the first time. © 2017 The Society for Applied Microbiology.

  20. Summary of space imagery studies in Utah and Nevada

    NASA Technical Reports Server (NTRS)

    Jensen, M. L. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Comparison of a geological map of the Bingham porphyry copper deposit with an enhanced enlargement of an S190B color image at a scale of 1:19,200 rated the space imagery as fair for the intrusion boundaries but of no value for mapping the sediments. Hydrothermal alteration is only slightly evident on space imagery at Bingham, but in the Tintic mining district and the volcanic piles of the Keg and Thomas ranges, Utah, hydrothermal alteration is readily mapped on color enlargements of S190B. Several sites of calderas were recognized and new ones located on space imagery. One of the tools developed is a mercury soil-gas analyzer that is becoming significant as an aid in locating hidden mineralized zones suggested from space imagery. In addition, this tool is a prime aid in locating and better delineating geothermal sites.

  1. Mobile robot motion estimation using Hough transform

    NASA Astrophysics Data System (ADS)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for estimation of mobile robot motion. The geometry of surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot’s range sensors. A similar sample of space geometry from any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling, and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
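
    A minimal sketch of the rotation subproblem in this decomposition: map scan points into the straight-line parameter space, keep only the angle coordinate, and find the circular shift that best aligns the two angle histograms (rotation is then recovered modulo 180 degrees, independently of translation and scale). The ordered-scan assumption and the diff-based orientation estimate are simplifications, not the paper's exact procedure:

    ```python
    import numpy as np

    def orientation_hist(scan, n_bins=180):
        """Histogram of local segment orientations (mod pi) from an ordered
        2D scan; these are the theta coordinates of the line parameter space."""
        d = np.diff(scan, axis=0)
        ang = np.mod(np.arctan2(d[:, 1], d[:, 0]), np.pi)
        hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi))
        return hist.astype(float)

    def estimate_rotation(scan_ref, scan_cur, n_bins=180):
        """Rotation = circular shift maximizing correlation of the two
        orientation histograms; translation and scale drop out entirely."""
        h1 = orientation_hist(scan_ref, n_bins)
        h2 = orientation_hist(scan_cur, n_bins)
        corr = [np.dot(h1, np.roll(h2, -s)) for s in range(n_bins)]
        return np.argmax(corr) * np.pi / n_bins

    # Toy check: a square outline rotated by 30 degrees.
    s = np.linspace(0, 1, 50)
    square = np.vstack([np.c_[s, 0 * s], np.c_[0 * s + 1, s],
                        np.c_[s[::-1], 0 * s + 1], np.c_[0 * s, s[::-1]]])
    th = np.pi / 6
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    print(np.degrees(estimate_rotation(square, square @ R.T)))  # ~30.0
    ```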

  2. A Lightning Channel Retrieval Algorithm for the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, William; Arnold, James E. (Technical Monitor)

    2002-01-01

    A new multi-station VHF time-of-arrival (TOA) antenna network is, at the time of this writing, coming on-line in Northern Alabama. The network, called the Lightning Mapping Array (LMA), employs GPS timing and detects VHF radiation from discrete segments (effectively point emitters) that comprise the channel of lightning strokes within cloud and ground flashes. The network will support on-going ground validation activities of the low Earth orbiting Lightning Imaging Sensor (LIS) satellite developed at NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama. It will also provide for many interesting and detailed studies of the distribution and evolution of thunderstorms and lightning in the Tennessee Valley, and will offer many interesting comparisons with other meteorological/geophysical data sets associated with lightning and thunderstorms. In order to take full advantage of these benefits, it is essential that the LMA channel mapping accuracy (in both space and time) be fully characterized and optimized. In this study, a new revised channel mapping retrieval algorithm is introduced. The algorithm is an extension of earlier work provided in Koshak and Solakiewicz (1996) in the analysis of the NASA Kennedy Space Center (KSC) Lightning Detection and Ranging (LDAR) system. As in the 1996 study, direct algebraic solutions are obtained by inverting a simple linear system of equations, thereby making computer searches through a multi-dimensional parameter domain of a Chi-Squared function unnecessary. However, the new algorithm is developed completely in spherical Earth-centered coordinates (longitude, latitude, altitude), rather than in the (x, y, z) Cartesian coordinates employed in the 1996 study. Hence, no mathematical transformations from (x, y, z) into spherical coordinates are required (such transformations involve more numerical error propagation, more computer program coding, and slightly more CPU computing time). The new algorithm also has a more realistic definition of source altitude that accounts for Earth oblateness (this can become important for sources that are hundreds of kilometers away from the network). In addition, the new algorithm is being applied to analyze computer simulated LMA datasets in order to obtain detailed location/time retrieval error maps for sources in and around the LMA network. These maps will provide a more comprehensive analysis of retrieval errors for LMA than the 1996 study did of LDAR retrieval errors. Finally, we note that the new algorithm can be applied to LDAR, and essentially any other multi-station TOA network that depends on direct line-of-sight antenna excitation.
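
    The "direct algebraic solution by inverting a simple linear system" can be illustrated in a Cartesian toy version (the paper's new algorithm works in spherical Earth-centered coordinates, which is not reproduced here). Differencing the range equations c^2 (t_i - t)^2 = |x - s_i|^2 against a reference station cancels the quadratic terms, leaving a system that is linear in the source position and emission time; the station layout below is invented for illustration:

    ```python
    import numpy as np

    C = 2.99792458e8  # speed of light (m/s)

    def toa_source(stations, t_arr):
        """Direct algebraic TOA retrieval, Cartesian toy version.
        Differencing c^2 (t_i - t)^2 = |x - s_i|^2 against station 0
        cancels the quadratic terms, leaving a linear system in (x, y, z, t)."""
        s0, t0 = stations[0], t_arr[0]
        A = np.column_stack([2 * (stations[1:] - s0),
                             -2 * C**2 * (t_arr[1:] - t0)])
        b = (np.sum(stations[1:] ** 2, axis=1) - np.sum(s0**2)
             - C**2 * (t_arr[1:] ** 2 - t0**2))
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol[:3], sol[3]  # source position (m), emission time (s)

    # Toy network: 6 stations tens of km apart, a source at 8 km altitude.
    rng = np.random.default_rng(4)
    sta = rng.uniform(-3e4, 3e4, (6, 3))
    sta[:, 2] = rng.uniform(0.0, 500.0, 6)  # station altitudes near ground
    src, t_emit = np.array([5e3, -7e3, 8e3]), 1.0e-3
    t_arrival = t_emit + np.linalg.norm(sta - src, axis=1) / C
    pos, t_hat = toa_source(sta, t_arrival)
    print(np.round(pos), t_hat)  # recovers ~[5000, -7000, 8000] and ~1e-3
    ```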

  3. Influence of Intramyocardial Adipose Tissue on the Accuracy of Endocardial Contact Mapping of the Chronic Myocardial Infarction Substrate.

    PubMed

    Samanta, Rahul; Kumar, Saurabh; Chik, William; Qian, Pierre; Barry, Michael A; Al Raisi, Sara; Bhaskaran, Abhishek; Farraha, Melad; Nadri, Fazlur; Kizana, Eddy; Thiagalingam, Aravinda; Kovoor, Pramesh; Pouliopoulos, Jim

    2017-10-01

    Recent studies have demonstrated that intramyocardial adipose tissue (IMAT) may contribute to ventricular electrophysiological remodeling in patients with chronic myocardial infarction. Using an ovine model of myocardial infarction, we aimed to determine the influence of IMAT on scar tissue identification during endocardial contact mapping and optimal voltage-based mapping criteria for defining IMAT dense regions. In 7 sheep, left ventricular endocardial and transmural mapping was performed 84 weeks (15-111 weeks) post-myocardial infarction. Spearman rank correlation coefficient was used to assess the relationship between endocardial contact electrogram amplitude and histological composition of myocardium. Receiver operator characteristic curves were used to derive optimal electrogram thresholds for IMAT delineation during endocardial mapping and to describe the use of endocardial mapping for delineation of IMAT dense regions within scar. Endocardial electrogram amplitude correlated significantly with IMAT (unipolar r = -0.48 ± 0.12, P < 0.001; bipolar r = -0.45 ± 0.22, P = 0.04) but not collagen (unipolar r = -0.36 ± 0.24, P = 0.13; bipolar r = -0.43 ± 0.31, P = 0.16). IMAT dense regions of myocardium were reliably identified using endocardial mapping, with unipolar and bipolar thresholds of <3.7 mV and <0.6 mV, respectively (single modality area under the curve = 0.80, P < 0.001; combined modality area under the curve = 0.84, P < 0.001). Unipolar mapping using optimal thresholding remained significantly reliable (area under the curve = 0.76, P < 0.001) during mapping of IMAT confined to putative scar border zones (bipolar amplitude, 0.5-1.5 mV). These novel findings enhance our understanding of the confounding influence of IMAT on endocardial scar mapping. Combined bipolar and unipolar voltage mapping using optimal thresholds may be useful for delineating IMAT dense regions of myocardium in postinfarct cardiomyopathy. © 2017 American Heart Association, Inc.

  4. Optimization of a charge-state analyzer for electron cyclotron resonance ion source beams.

    PubMed

    Saminathan, S; Beijers, J P M; Kremers, H R; Mironov, V; Mulder, J; Brandenburg, S

    2012-07-01

    A detailed experimental and simulation study of the extraction of a 24 keV He(+) beam from an ECR ion source and the subsequent beam transport through an analyzing magnet is presented. We find that such a slow ion beam is very sensitive to space-charge forces, but also that the neutralization of the beam's space charge by secondary electrons is virtually complete for beam currents up to at least 0.5 mA. The beam emittance directly behind the extraction system is 65 π mm mrad and is determined by the fact that the ion beam is extracted in the strong magnetic fringe field of the ion source. The relatively large emittance of the beam and its non-paraxiality lead, in combination with a relatively small magnet gap, to significant beam losses and a five-fold increase of the effective beam emittance during its transport through the analyzing magnet. The calculated beam profile and phase-space distributions in the image plane of the analyzing magnet agree well with measurements. The kinematic and magnet aberrations have been studied using the calculated second-order transfer map of the analyzing magnet, with which we can reproduce the phase-space distributions of the ion beam behind the analyzing magnet. Using the transfer map and trajectory calculations we have worked out an aberration compensation scheme based on the addition of compensating hexapole components to the main dipole field by modifying the shape of the poles. The simulations predict that by compensating the kinematic and geometric aberrations in this way and enlarging the pole gap the overall beam transport efficiency can be increased from 16% to 45%.

  5. InAs1-xSbx Alloys with Native Lattice Parameters Grown on Compositionally Graded Buffers: Structural and Optical Properties

    DTIC Science & Technology

    2013-08-15

    Authors: Ding Wang, Dmitry Donetsky, Youxi Lin, et al. Keywords: InAsSb; compositionally graded buffer; MBE; infrared; minority carrier lifetime; reciprocal space mapping. Only fragments of the report text are available: GaSb-based III-V materials are widely used in the development of mid-infrared devices, and the structures were characterized by reciprocal space mapping (RSM) at the symmetric (004) and asymmetric (335) Bragg reflections, with Figure 3 presenting a set of RSM measurements for a structure.

  6. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella

    2018-03-01

    The multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from multi-verse theory in physics. However, MVO, like most optimization algorithms, suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems in the engineering and mechanical fields, namely: three-bar truss, speed reducer design, pressure vessel, spring design, welded beam, rolling element bearing, and multiple disc clutch brake. In the current study, a modified feasible-based mechanism is employed to handle constraints. In this mechanism, four rules were used to handle the specific constraint problem through maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. Also, the results reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.
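
    Since the study singles out the sine chaotic map, here is what that component looks like in isolation: a deterministic sequence on (0, 1) that substitutes for uniform random draws inside the metaheuristic. The seed and the way the sequence would be consumed by MVO are assumptions:

    ```python
    import numpy as np

    def sine_map(x, a=4.0):
        """Sine chaotic map, x_{k+1} = (a/4) * sin(pi * x_k) on (0, 1);
        a standard map used to replace uniform random draws in chaotic
        metaheuristics (a = 4 gives fully chaotic behaviour)."""
        return (a / 4.0) * np.sin(np.pi * x)

    def chaotic_sequence(n, x0=0.7, a=4.0):
        seq = np.empty(n)
        x = x0
        for i in range(n):
            x = sine_map(x, a)
            seq[i] = x
        return seq

    # In a chaotic MVO, draws such as wormhole-existence probabilities
    # would be taken from this sequence instead of rng.random(); this is
    # a sketch, not the authors' exact update rule.
    print(chaotic_sequence(5))
    ```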

  7. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    PubMed

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Greater chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology

  8. Customised City Maps in Mobile Applications for Senior Citizens.

    PubMed

    Reins, Frank; Berker, Frank; Heck, Helmut

    2017-01-01

    Map services should be used in mobile applications for senior citizens. But do the commonly used map services meet the needs of elderly people? As an example, the paper examines the contrast ratios of common maps in comparison with an optimized custom-rendered map.

  9. Landsat Time-Series Analysis Opens New Approaches for Regional Glacier Mapping

    NASA Astrophysics Data System (ADS)

    Winsvold, S. H.; Kääb, A.; Nuth, C.; Altena, B.

    2016-12-01

    The archive of Landsat satellite scenes is important for mapping of glaciers, especially as it represents the longest running and continuous satellite record of sufficient resolution to track glacier changes over time. Newly launched optical sensors (Landsat 8 and Sentinel-2A) and those upcoming in the near future (Sentinel-2B) will promote very high temporal resolution of optical satellite images, especially in high-latitude regions. Because of the potential that lies within such near-future dense time series, methods for mapping glaciers from space should be revisited. We present application scenarios that utilize and explore dense time series of optical data for automatic mapping of glacier outlines and glacier facies. Throughout the season, glaciers display a temporal sequence of properties in optical reflection as the seasonal snow melts away, and glacier ice appears in the ablation area and firn in the accumulation area. In one application scenario presented, we simulated potential future seasonal resolution using several years of Landsat 5TM/7ETM+ data, and found a sinusoidal evolution of the spectral reflectance for on-glacier pixels throughout a year. We believe this is because of the short wave infrared band and its sensitivity to snow grain size. The parameters retrieved from the fitted sinusoidal curve can be used for glacier mapping purposes; we also found similar results using, e.g., the mean of summer band-ratio images. In individual optical mapping scenes, conditions will vary (e.g., snow, ice, and clouds) and will not be equally optimal over the entire scene. Using robust statistics on stacked pixels reveals a potential for synthesizing optimal mapping scenes from a temporal stack, as we present in a further application scenario. The dense time series available from satellite imagery will also promote multi-temporal and multi-sensor based analyses. The seasonal pattern of snow and ice on a glacier seen in the optical time series can in the summer season also be observed using radar backscatter series. Optical sensors reveal the reflective properties at the surface, while radar sensors may penetrate the surface, revealing properties from a certain volume. In an outlook to this contribution, we explore how information from SAR and optical sensor systems can be combined for different purposes.
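
    Fitting the per-pixel seasonal sinusoid mentioned above needs no nonlinear solver: rewriting m + A*sin(w*t + phase) in terms of sin and cos components makes it an ordinary least-squares problem. The 365-day period, noise level, and single-band handling below are illustrative assumptions:

    ```python
    import numpy as np

    def fit_annual_sinusoid(doy, refl):
        """Least-squares fit of refl ~ m + A*sin(2*pi*doy/365 + phase) for
        one pixel's reflectance time series, rewritten as a linear model in
        (sin, cos) components. A sketch of the idea in the abstract; band
        and date handling in real Landsat stacks is more involved."""
        w = 2 * np.pi * doy / 365.0
        X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
        (m, s, c), *_ = np.linalg.lstsq(X, refl, rcond=None)
        amp, phase = np.hypot(s, c), np.arctan2(c, s)
        return m, amp, phase   # mean level, seasonal amplitude, phase

    # Toy pixel: seasonal swing in a short-wave-infrared band ratio.
    rng = np.random.default_rng(5)
    doy = np.sort(rng.integers(1, 366, 40))
    true = 0.4 + 0.25 * np.sin(2 * np.pi * doy / 365 + 1.2)
    print(fit_annual_sinusoid(doy, true + 0.02 * rng.normal(size=40)))
    ```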

  10. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
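
    A skeletal version of the parameterization and objective described above (all shapes, the toy detectability stand-in, and the five sample locations are assumptions; the paper uses a model-based d' predictor and optimizes the coefficients with CMA-ES):

    ```python
    import numpy as np

    def ffm_from_gaussians(coef, centers, sigma, n_views=180, n_det=128):
        """Fluence field over (view angle, detector position) built from
        2D Gaussian basis functions; `coef` are the weights that the outer
        optimizer (e.g., CMA-ES) would adjust. Shapes are illustrative."""
        v = np.linspace(0, 1, n_views)[:, None, None]
        u = np.linspace(0, 1, n_det)[None, :, None]
        cv, cu = centers[:, 0][None, None, :], centers[:, 1][None, None, :]
        basis = np.exp(-((v - cv) ** 2 + (u - cu) ** 2) / (2 * sigma ** 2))
        return np.clip(basis @ coef, 0.0, None)   # non-negative fluence

    def maxi_min_objective(coef, centers, sigma, detectability):
        """Maxi-min design: score a fluence pattern by the *worst* local
        detectability over sample locations, so the optimizer raises the
        floor rather than the average."""
        fluence = ffm_from_gaussians(coef, centers, sigma)
        return min(detectability(fluence, loc) for loc in range(5))

    # Toy stand-in d': rewards fluence through views indexed near each location.
    toy_d = lambda f, k: float(f[20 + 30 * k : 40 + 30 * k].mean())
    centers = np.random.default_rng(6).random((12, 2))
    coef = np.ones(12)
    print(maxi_min_objective(coef, centers, 0.2, toy_d))
    ```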

  11. Astronaut Kevin Chilton displays map of Scandinavia on flight deck

    NASA Image and Video Library

    1994-04-14

    STS059-16-032 (9-20 April 1994) --- Astronaut Kevin P. Chilton, pilot, displays a map of Scandinavia on the Space Shuttle Endeavour's flight deck. Large scale maps such as this were used by the crew to locate specific sites of interest to the Space Radar Laboratory scientists. The crew then photographed the sites at the same time as the radar in the payload bay imaged them. Chilton was joined in space by five other NASA astronauts for a week and a half of support to the Space Radar Laboratory (SRL-1) mission and other tasks.

  12. 3D models mapping optimization through an integrated parameterization approach: cases studies from Ravenna

    NASA Astrophysics Data System (ADS)

    Cipriani, L.; Fantini, F.; Bertacchi, S.

    2014-06-01

    Image-based modelling tools based on SfM algorithms gained great popularity since several software houses provided applications able to achieve 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve a better quality of textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for the achievement of a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained by using two different strategies: the former automatic and fast, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances inadequate and negatively affects the overall quality of representation. By using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.

  13. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    PubMed

    Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart

    2016-01-01

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of the imputation error rate was propagated to genomic prediction in an Angus population. The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNPs selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal.
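
    The locus-averaged Shannon entropy (LASE) term of the objective, and a toy greedy selection trading information content against even spacing, can be sketched as follows (Hardy-Weinberg genotype probabilities are assumed; the published MOLO algorithm's exact objective and search are not reproduced here):

    ```python
    import numpy as np

    def locus_shannon_entropy(freqs):
        """Locus-averaged Shannon entropy of SNP allele frequencies,
        assuming Hardy-Weinberg genotype probabilities (p^2, 2pq, q^2)
        per locus, averaged over loci."""
        p = np.asarray(freqs)
        g = np.stack([p**2, 2 * p * (1 - p), (1 - p) ** 2], axis=1)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(g > 0, g * np.log2(g), 0.0)
        return float(-terms.sum(axis=1).mean())

    def greedy_ld_panel(freqs, positions, k):
        """Greedily pick k SNPs by a toy score: entropy minus the
        coefficient of variation of inter-SNP gaps (rewarding even
        spacing). Illustrative, not the published algorithm."""
        chosen = []
        for _ in range(k):
            best, best_s = None, -np.inf
            for i in range(len(freqs)):
                if i in chosen:
                    continue
                idx = sorted(chosen + [i])
                gaps = (np.diff(positions[idx]) if len(idx) > 1
                        else np.array([1.0]))
                s = locus_shannon_entropy(freqs[idx]) - gaps.std() / gaps.mean()
                if s > best_s:
                    best, best_s = i, s
            chosen.append(best)
        return sorted(chosen)

    rng = np.random.default_rng(7)
    maf = rng.uniform(0.01, 0.5, 200)
    pos = np.sort(rng.uniform(0, 1e6, 200))
    print(greedy_ld_panel(maf, pos, 10))
    ```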

  14. The Structure of Optimum Interpolation Functions.

    DTIC Science & Technology

    1983-02-01

    Only reference-list fragments of this report are available, citing work on optimal contour mapping and universal kriging, including Ricardo O. Olea, "Optimal Contour Mapping Using Universal Kriging," J. of Geophysical Res. 79, and related comments by Hiroshi Akima in Mathematical Geology 14 (1982), 249-257.

  15. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineament detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineament mapping accuracies are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, Landsat images and selected radar imagery of major lineament systems distributed over much of the Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.

  16. Mapping experiment with space station

    NASA Technical Reports Server (NTRS)

    Wu, Sherman S. C.

    1987-01-01

    Mapping the earth from space stations can be approached in two areas. One is to collect gravity data for defining a new topographic datum using the earth's gravitational field in terms of spherical harmonics. The other, which should be considered as a very significant contribution of the Space Station, is to search and explore techniques of mapping the earth's topography using either optical or radar images with or without references to ground control points. Geodetic position of ground control points can be predetermined by the Global Positioning System (GPS) for the mapping experiment with the Space Station. It is proposed to establish four ground control points in North America or Africa (including the Sahara Desert). If this experiment should be successfully accomplished, it may also be applied to the defense charting service.

  17. JIGSAW: Joint Inhomogeneity estimation via Global Segment Assembly for Water-fat separation.

    PubMed

    Lu, Wenmiao; Lu, Yi

    2011-07-01

    Water-fat separation in magnetic resonance imaging (MRI) is of great clinical importance, and the key to uniform water-fat separation lies in field map estimation. This work deals with three-point field map estimation, in which water and fat are modelled as two single-peak spectral lines, and field inhomogeneities shift the spectrum by an unknown amount. Due to the simplified spectrum modelling, there exists inherent ambiguity in forming field maps from multiple locally feasible field map values at each pixel. To resolve such ambiguity, spatial smoothness of field maps has been incorporated as a constraint of an optimization problem. However, there are two issues: the optimization problem is computationally intractable and even when it is solved exactly, it does not always separate water and fat images. Hence, robust field map estimation remains challenging in many clinically important imaging scenarios. This paper proposes a novel field map estimation technique called JIGSAW. It extends a loopy belief propagation (BP) algorithm to obtain an approximate solution to the optimization problem. The solution produces locally smooth segments and avoids error propagation associated with greedy methods. The locally smooth segments are then assembled into a globally consistent field map by exploiting the periodicity of the feasible field map values. In vivo results demonstrate that JIGSAW outperforms existing techniques and produces correct water-fat separation in challenging imaging scenarios.

  18. Familiarity expands space and contracts time.

    PubMed

    Jafarpour, Anna; Spiers, Hugo

    2017-01-01

    When humans draw maps, or make judgments about travel-time, their responses are rarely accurate and are often systematically distorted. Distortion effects on estimating time to arrival and the scale of sketch-maps reveal the nature of mental representation of time and space. Inspired by data from rodent entorhinal grid cells, we predicted that familiarity to an environment would distort representations of the space by expanding the size of it. We also hypothesized that travel-time estimation would be distorted in the same direction as space-size, if time and space rely on the same cognitive map. We asked international students, who had lived at a college in London for 9 months, to sketch a south-up map of their college district, estimate travel-time to destinations within the area, and mark their everyday walking routes. We found that while estimates for sketched space were expanded with familiarity, estimates of the time to travel through the space were contracted with familiarity. Thus, we found dissociable responses to familiarity in representations of time and space. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc.

  19. A Control Algorithm for Chaotic Physical Systems

    DTIC Science & Technology

    1991-10-01

    Only fragments of this report are available, describing the state-space mapping step of the control algorithm: a grid is set up and expanded to cover the entire area of any attractor present, with inputs including a sampling interval, an overrange R0, a control parameter interval and range, and an iteration depth.

  1. Time-to-space mapping of a continuous light wave with picosecond time resolution based on an electrooptic beam deflection.

    PubMed

    Hisatake, S; Kobayashi, T

    2006-12-25

    We demonstrate a time-to-space mapping of an optical signal with a picosecond time resolution based on an electrooptic beam deflection. A time axis of the optical signal is mapped into a spatial replica by the deflection. We theoretically derive a minimum time resolution of the time-to-space mapping and confirm it experimentally on the basis of the pulse width of the optical pulses picked out from the deflected beam through a narrow slit which acts as a temporal window. We have achieved a minimum time resolution of 1.6 ± 0.2 ps.

  2. Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.

    2010-12-01

    Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on the proximity to seismic stations in a network. We show that a relationship of the form Mc_pred(d) = a·d^b + c, with d the distance to the 5th nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a 2-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc based on the proximity to seismic stations with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing a Mc map for the period 1994-2010.
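
    Both steps of the idea fit in a few lines: fit the Mc-versus-distance power law to calibration data, then merge its prediction with a locally observed Mc by precision weighting. SciPy's curve_fit is assumed for the fit; the coefficients and uncertainties below are toy values, not those of the Taiwan catalog:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mc_prior(d, a, b, c):
        """Completeness magnitude vs distance d (km) to the 5th nearest
        station: Mc_pred(d) = a * d**b + c, the empirical form quoted in
        the abstract (coefficients are fit, not published values)."""
        return a * d**b + c

    def bmc_merge(mc_prior_val, sigma_prior, mc_obs, sigma_obs):
        """Bayesian (precision-weighted) merge of the prior Mc from station
        proximity with the locally observed Mc, as in the BMC second step."""
        w_p, w_o = 1.0 / sigma_prior**2, 1.0 / sigma_obs**2
        post = (w_p * mc_prior_val + w_o * mc_obs) / (w_p + w_o)
        return post, np.sqrt(1.0 / (w_p + w_o))

    # Toy calibration data: Mc observations vs station distance.
    rng = np.random.default_rng(8)
    d = rng.uniform(2, 200, 300)
    mc = 0.5 * d**0.3 + 0.2 + rng.normal(0, 0.15, 300)
    (a, b, c), _ = curve_fit(mc_prior, d, mc, p0=(1.0, 0.5, 0.0))
    print(bmc_merge(mc_prior(50.0, a, b, c), 0.2, 2.1, 0.3))
    ```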

  3. Face sketch recognition based on edge enhancement via deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    In this paper, we address the face sketch recognition problem. First, we utilize the eigenface algorithm to convert a sketch into a synthesized face image. Subsequently, considering the low-level vision problems in the synthesized face image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual quality. To be specific, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (linear discriminant analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (support vector machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super resolution not only better describes image details such as hair, nose and mouth, but also improves the recognition accuracy effectively.
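
    The residual-mapping idea, learning only the high-frequency correction and adding it back to the upscaled input, can be sketched as a small network (PyTorch is assumed; layer counts and widths are illustrative, not the architecture of the paper):

    ```python
    import torch
    import torch.nn as nn

    class ResidualSR(nn.Module):
        """Lightweight super-resolution net that learns a residual mapping:
        the body predicts only the high-frequency correction, which is
        added back to the (already upscaled) input image."""
        def __init__(self, channels=1, features=32, depth=4):
            super().__init__()
            layers = [nn.Conv2d(channels, features, 3, padding=1),
                      nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(features, features, 3, padding=1),
                           nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(features, channels, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, x):          # x: bicubically upscaled input
            return x + self.body(x)    # residual learning: easier to optimize

    net = ResidualSR()
    y = net(torch.randn(1, 1, 64, 64))  # same-size output with learned detail
    ```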

  4. Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking.

    PubMed

    Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen

    2017-01-01

    An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a two-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning- based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.

  5. Multidimensionally encoded magnetic resonance imaging.

    PubMed

    Lin, Fa-Hsuan

    2013-07-01

    Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.

  6. Generalized contractive mappings and weakly α-admissible pairs in G-metric spaces.

    PubMed

    Hussain, N; Parvaneh, V; Hoseini Ghoncheh, S J

    2014-01-01

    The aim of this paper is to present some coincidence and common fixed point results for generalized (ψ, φ)-contractive mappings using partially weakly G-α-admissibility in the setup of G-metric space. As an application of our results, periodic points of weakly contractive mappings are obtained. We also derive certain new coincidence point and common fixed point theorems in partially ordered G-metric spaces. Moreover, some examples are provided here to illustrate the usability of the obtained results.

  7. A diagnostic algorithm to optimize data collection and interpretation of Ripple Maps in atrial tachycardias.

    PubMed

    Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa

    2015-11-15

    Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I - high confidence with clear pattern of activation through to Grade IV - non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; window-of-interest < 100% of cycle length (CL); < 95% of tachycardia CL mapped; variability of CL and/or unstable fiducial reference marker; and suboptimal bar height and scar settings. A data collection and map interpretation algorithm has been developed to optimize Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. OBLIMAP 2.0: a fast climate model-ice sheet model coupler including online embeddable mapping routines

    NASA Astrophysics Data System (ADS)

    Reerink, Thomas J.; van de Berg, Willem Jan; van de Wal, Roderik S. W.

    2016-11-01

    This paper accompanies the second OBLIMAP open-source release. The package is developed to map climate fields between a general circulation model (GCM) and an ice sheet model (ISM) in both directions by using optimally aligned oblique projections, which minimize distortions. The curvatures of the GCM and ISM grid surfaces differ, both grids may be irregularly spaced, and their grid sizes may differ greatly. OBLIMAP's stand-alone version is able to map data sets that differ in various aspects onto the same ISM grid. Each grid may either coincide with the surface of a sphere, an ellipsoid or a flat plane, while the grid types might differ. Re-projection of, for example, ISM data sets is also facilitated. This is demonstrated by relevant applications concerning the major ice caps. As the stand-alone version also applies to the reverse mapping direction, it can be used as an offline coupler. Furthermore, OBLIMAP 2.0 is an embeddable GCM-ISM coupler, suited for high-frequency online coupled experiments. A new fast scan method is presented for structured grids as an alternative to the former time-consuming grid search strategy, realising a performance gain of several orders of magnitude and enabling the mapping of high-resolution data sets with a much larger number of grid nodes. Further, a highly flexible masked mapping option is added. The limitation of the fast scan method with respect to unstructured and adaptive grids is discussed together with a possible future parallel Message Passing Interface (MPI) implementation.
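
    For readers unfamiliar with the projection family underlying OBLIMAP, the sketch below implements a standard oblique stereographic projection on a sphere. The projection center and Earth radius are illustrative assumptions; OBLIMAP itself chooses the oblique projection parameters to minimize distortion over the ISM domain, and this is not OBLIMAP code.

    ```python
    # Oblique stereographic projection (spherical approximation).
    import numpy as np

    R = 6.371e6  # Earth radius in metres

    def oblique_stereographic(lon_deg, lat_deg, lon0_deg=-40.0, lat0_deg=72.0):
        """Forward projection: (lon, lat) -> tangent-plane (x, y) in metres.
        The center (lon0, lat0) is a placeholder, e.g. over an ice sheet."""
        lam, phi = np.radians(lon_deg), np.radians(lat_deg)
        lam0, phi0 = np.radians(lon0_deg), np.radians(lat0_deg)
        k = 2.0 * R / (1.0 + np.sin(phi0) * np.sin(phi)
                       + np.cos(phi0) * np.cos(phi) * np.cos(lam - lam0))
        x = k * np.cos(phi) * np.sin(lam - lam0)
        y = k * (np.cos(phi0) * np.sin(phi)
                 - np.sin(phi0) * np.cos(phi) * np.cos(lam - lam0))
        return x, y
    ```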

  10. Contributions to integrative knowledge of West Nile virus reported in Romania - methods and tools for managing health-environment relationship at different spatial and temporal scales

    NASA Astrophysics Data System (ADS)

    Baltesiu, L.; Gomoiu, M. T.; Mudura, R.; Nicolescu, G.; Purcarea-Ciulacu, V.

    2012-04-01

    After 1990, environmental changes at the national, European and global level led to the emergence and re-emergence of infectious diseases. Among these diseases, vector-borne ones became established over very large areas where pathogens entered complex transmission cycles within local ecosystems. The environmental changes were driven by climatic (temperature and precipitation), geomorphologic (altitude) and anthropogenic (land cover/land use) factors. These changes made it necessary to anticipate, prevent and control epidemics in order to avoid major crises of natural and socio-economic systems. In these circumstances, the risk of re-emergence of West Nile virus infection increased, making it a public health problem for Romania. Our research consisted of assessing this risk as a function of the environmental changes that can influence the presence, space-time distribution and dynamics of the elements of the virus transmission cycle. Study areas were selected to include, on the one hand, very different natural ecosystems and, on the other, continuously changing anthropogenic ecosystems that provide optimal conditions for vector-borne West Nile virus. These areas were the Danube Delta including the Razim-Sinoe complex (Tulcea County) and the Bucharest Metropolitan Area (BMA) (Bucharest and Ilfov & Giurgiu Counties). The Danube Delta lagoon area is the gateway for West Nile virus into Romania, and during the 1996 epidemic of neurological West Nile virus infection, 60% of all human cases were recorded in the BMA. For the period 2009-2011 the authors developed risk maps for West Nile virus vectors and vertebrate hosts depending on climatic, geomorphologic and anthropogenic changes. The maps were made using ArcGIS - ArcMap software as a function of mean annual temperature and precipitation; for the altitude risk map we used the hypsographic map of Romania, and for the land cover/land use risk map, information provided by the Land Cover/Land Use Classification System for Romania (2003). The four types of risk map (by temperature, precipitation, altitude and land cover/land use) were overlaid to produce the final risk maps. Space-time distribution maps were also made at the national and regional level for vertebrate hosts and vectors. On the basis of this information, forecasts are developed concerning the occurrence of these diseases in different types of ecosystems, as well as early warnings and strategies at the national and European level to protect the human population.
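
    Computationally, the overlay step described above is a weighted raster combination. A minimal sketch under assumed inputs (four co-registered 2-D risk rasters on a common grid); the equal weights are placeholders, since the abstract does not state them:

    ```python
    # Weighted overlay of four co-registered risk rasters into a final risk map.
    import numpy as np

    def combine_risk(temp, precip, alt, lulc, weights=(0.25, 0.25, 0.25, 0.25)):
        """Each input: 2-D array of risk scores on a common grid."""
        layers = np.stack([temp, precip, alt, lulc]).astype(float)
        w = np.asarray(weights, dtype=float)[:, None, None]
        return (w * layers).sum(axis=0) / w.sum()
    ```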

  11. Contractive type non-self mappings on metric spaces of hyperbolic type

    NASA Astrophysics Data System (ADS)

    Ciric, Ljubomir B.

    2006-05-01

    Let (X,d) be a metric space of hyperbolic type and K a nonempty closed subset of X. In this paper we study a class of mappings from K into X (not necessarily self-mappings on K), which are defined by the contractive condition (2.1) below, and a class of pairs of mappings from K into X which satisfy the condition (2.28) below. We present fixed point and common fixed point theorems which are generalizations of the corresponding fixed point theorems of Ciric [L.B. Ciric, Quasi-contraction non-self mappings on Banach spaces, Bull. Acad. Serbe Sci. Arts 23 (1998) 25-31; L.B. Ciric, J.S. Ume, M.S. Khan, H.K.T. Pathak, On some non-self mappings, Math. Nachr. 251 (2003) 28-33], Rhoades [B.E. Rhoades, A fixed point theorem for some non-self mappings, Math. Japon. 23 (1978) 457-459] and many other authors. Some examples are presented to show that our results are genuine generalizations of known results from this area.

  12. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
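
    The balance the abstract describes can be written as a penalized least-squares objective with a spatially varying prior-strength map lam(r). The sketch below shows one gradient step on such an objective; the quadratic penalty, the dense matrix A standing in for the forward projector, and all names are simplifying assumptions rather than the authors' formulation.

    ```python
    # One gradient step on 0.5*||A x - y||^2 + 0.5*sum(lam_map * (x - x_prior)^2),
    # a simplified PIBR-style objective with a spatially varying prior strength.
    import numpy as np

    def pibr_gradient_step(x, y, A, x_prior, lam_map, step=1e-3):
        grad = A.T @ (A @ x - y) + lam_map * (x - x_prior)
        return x - step * grad
    ```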

  13. Numerical Studies of High-Intensity Injection Painting for Project X

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drozhdin, A.I.; Vorobiev, L.G.; Johnson, D.E.

    Injection phase space painting enables the mitigation of space charge and stability issues, and will be indispensable for Project X at Fermilab [1], delivering high-intensity proton beams to HEP experiments. Numerical simulations of multi-turn phase space painting have been performed for the FNAL Recycler Ring, including a self-consistent space charge model. The goal of our studies was to study injection painting with the inclusion of 3D space charge, using the ORBIT tracking code. In the current scenario the painting lasts for 110 turns, twice as fast as the scenario considered in this paper. The optimal waveforms for the painting kickers, which ensure flatter phase-space distributions, remain to be found. So far we used a simplified model for the painting kicker strength (implemented as the 'ideal bump' in ORBIT). We will include a more realistic field map for the chicane magnets, and additional stripping simulations will be incorporated. We developed a block for longitudinal painting, which works with arbitrary notches in the incoming micro-bunch buckets. An appropriate choice of the amplitude of the second harmonic of the RF field will help to flatten the RF-bucket contours, as was demonstrated in 1D simulations. Nonlinear lattice issues will also be addressed.
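
    As a point of reference for the kicker waveforms mentioned above, a commonly used idealization paints an approximately uniform transverse density by ramping the closed-orbit bump with the square root of the turn number. The snippet below is only that textbook waveform with a placeholder amplitude; it is not the ORBIT 'ideal bump' implementation.

    ```python
    # Square-root painting waveform: bump amplitude vs. turn number.
    # x_max (mm) is a placeholder; 110 turns matches the scenario above.
    import numpy as np

    turns = 110
    x_max = 25.0
    bump = x_max * np.sqrt(np.arange(turns) / (turns - 1))  # ramps 0 -> x_max
    ```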

  14. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    PubMed Central

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727
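
    The cyclic conflict-detection/resolution loop is the heart of the approach. The toy sketch below keeps only that loop: buildings are reduced to points, a conflict is any pair closer than a minimum separation, and each detected conflict generates a displacement. The CDT skeleton, proximity-graph adjustment and elastic-beam deformation of the paper are replaced here by a simple pairwise repulsion, so this illustrates the iteration structure only.

    ```python
    # Iterative displace-until-no-conflict loop on point-reduced buildings.
    import numpy as np

    def displace(points, min_dist=10.0, step=0.5, max_iter=100):
        pts = np.asarray(points, dtype=float).copy()
        for _ in range(max_iter):
            moved = False
            for i in range(len(pts)):
                for j in range(i + 1, len(pts)):
                    d = pts[j] - pts[i]
                    dist = np.hypot(*d)
                    if dist < min_dist:             # conflict detected
                        push = step * (min_dist - dist) * d / max(dist, 1e-9)
                        pts[i] -= 0.5 * push        # resolve by moving both
                        pts[j] += 0.5 * push
                        moved = True
            if not moved:                           # all conflicts resolved
                break
        return pts
    ```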

  15. [Contribution of remote sensing to malaria control].

    PubMed

    Machault, V; Pages, F; Rogier, C

    2009-04-01

    Despite national and international efforts, malaria remains a major public health problem and the fight to control the disease is confronted by numerous hurdles. Study of the space and time dynamics of malaria is necessary as a basis for making appropriate decisions and prioritizing interventions, including in areas where field data are rare and sanitary information systems are inadequate. Evaluation of malarial risk should also help anticipate the risk of epidemics as a basis for early warning systems. Since the 1960s-70s, civilian satellites launched for Earth observation have been providing information for measuring or evaluating geo-climatic and anthropogenic factors related to malaria transmission and burden. Remotely sensed data gathered for several civilian or military studies have allowed the setup of entomological, parasitological, and epidemiological risk models and maps for rural and urban areas. Mapping of human populations at risk has also benefited from remote sensing. The results of the published studies show that remote sensing is a suitable tool for optimizing the planning, efficacy and efficiency of malaria control.

  16. Evolutionary optimization of biopolymers and sequence structure maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reidys, C.M.; Kopp, S.; Schuster, P.

    1996-06-01

    Searching for biopolymers having a predefined function is a core problem of biotechnology, biochemistry and pharmacy. On the level of RNA sequences and their corresponding secondary structures we show that this problem can be analyzed mathematically. The strategy will be to study the properties of the RNA sequence to secondary structure mapping that is essential for the understanding of the search process. We show that to each secondary structure s there exists a neutral network consisting of all sequences folding into s. This network can be modeled as a random graph and has the following generic properties: it is dense and has a giant component within the graph of compatible sequences. The neutral network percolates sequence space and any two neutral nets come close in terms of Hamming distance. We investigate the distribution of the orders of neutral nets and show that above a certain threshold the topology of neutral nets allows one to find practically all frequent secondary structures.

  17. Response Surface Methods for Spatially-Resolved Optical Measurement Techniques

    NASA Technical Reports Server (NTRS)

    Danehy, P. M.; Dorrington, A. A.; Cutler, A. D.; DeLoach, R.

    2003-01-01

    Response surface methods (or methodology), RSM, have been applied to improve data quality for two vastly different spatially-resolved optical measurement techniques. In the first application, modern design of experiments (MDOE) methods, including RSM, are employed to map the temperature field in a direct-connect supersonic combustion test facility at NASA Langley Research Center. The laser-based measurement technique known as coherent anti-Stokes Raman spectroscopy (CARS) is used to measure temperature at various locations in the combustor. RSM is then used to develop temperature maps of the flow. Even though the temperature fluctuations at a single point in the flowfield have a standard deviation on the order of 300 K, RSM provides analytic fits to the data having 95% confidence interval half width uncertainties in the fit as low as +/-30 K. Methods of optimizing future CARS experiments are explored. The second application of RSM is to quantify the shape of a 5-meter diameter, ultra-light, inflatable space antenna at NASA Langley Research Center.
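
    At its core, the RSM fit described above is a least-squares regression of a low-order polynomial surface onto scattered noisy measurements. A self-contained sketch with synthetic CARS-like temperatures follows; the quadratic model form and all numbers are illustrative assumptions.

    ```python
    # Fit a full quadratic response surface T(x, y) to scattered noisy samples.
    import numpy as np

    rng = np.random.default_rng(1)
    x, y = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
    T = 1800.0 + 400.0 * x - 250.0 * y**2 + rng.normal(0, 300, 50)  # synthetic

    def design(xq, yq):
        return np.column_stack([np.ones_like(xq), xq, yq, xq**2, xq * yq, yq**2])

    beta, *_ = np.linalg.lstsq(design(x, y), T, rcond=None)

    def surface(xq, yq):              # evaluate the fitted analytic surface
        return design(xq, yq) @ beta
    ```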

  18. Blind decomposition of Herschel-HIFI spectral maps of the NGC 7023 nebula

    NASA Astrophysics Data System (ADS)

    Berné, O.; Joblin, C.; Deville, Y.; Pilleri, P.; Pety, J.; Teyssier, D.; Gerin, M.; Fuente, A.

    2012-12-01

    Large spatial-spectral surveys are more and more common in astronomy. This calls for new methods to analyze such mega- to giga-pixel data cubes. In this paper we present a method to decompose such observations into a limited and comprehensive set of components. The original data can then be interpreted in terms of linear combinations of these components. The method uses non-negative matrix factorization (NMF) to extract latent spectral end-members in the data. The number of needed end-members is estimated based on the level of noise in the data. A Monte-Carlo scheme is adopted to estimate the optimal end-members, and their standard deviations. Finally, the maps of linear coefficients are reconstructed using non-negative least squares. We apply this method to a set of hyperspectral data of the NGC 7023 nebula, obtained recently with the HIFI instrument onboard the Herschel space observatory, and provide a first interpretation of the results in terms of the 3-dimensional dynamical structure of the region.
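
    A compact sketch of the described pipeline, using scikit-learn's NMF for end-member extraction and SciPy's non-negative least squares for the coefficient maps. The toy random cube, the fixed number of end-members (the paper estimates it from the noise level) and the omitted Monte-Carlo step are all simplifications.

    ```python
    # NMF end-member extraction plus NNLS coefficient maps on a toy cube.
    import numpy as np
    from scipy.optimize import nnls
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(2)
    cube = np.abs(rng.normal(size=(64 * 64, 200)))  # (pixels, channels), toy data

    k = 4                                 # end-member count; noise-driven in the paper
    model = NMF(n_components=k, init="nndsvd", max_iter=500)
    W = model.fit_transform(cube)         # per-pixel coefficients
    H = model.components_                 # spectral end-members, (k, channels)

    # Final step: re-fit the coefficient maps pixel-by-pixel with NNLS.
    coeff_maps = np.array([nnls(H.T, spec)[0] for spec in cube]).reshape(64, 64, k)
    ```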

  19. Map of Pluto Surface

    NASA Image and Video Library

    1998-03-28

    This image-based surface map of Pluto was assembled by computer image processing software from four separate images of Pluto's disk taken with the European Space Agency's Faint Object Camera aboard NASA's Hubble Space Telescope.

  20. Putting Space Back on the Map: Globalisation, Place and Identity

    ERIC Educational Resources Information Center

    Usher, Robin

    2002-01-01

    In this paper, the author wants to look at notions of "space", in order to examine why "space is in the midst of a renaissance" (Kaplan, 1996, p. 147), why it is, as it were, "back on the map". His intention here is to focus merely on one aspect of current changes in space-time-- the notion and actuality of "cyberspace", the most obvious…

  1. Definition of the metric on the space clos{sub ∅}(X) of closed subsets of a metric space X and properties of mappings with values in clos{sub ∅}(R{sup n})

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhukovskii, E S; Panasenko, E A

    2014-09-30

    The paper is concerned with the extension of tests for superpositional measurability, Filippov's implicit function lemma and the Scorza Dragoni property to set-valued (and, as a corollary, to single-valued) mappings that fail to satisfy the Carathéodory conditions (the upper Carathéodory conditions) and are not continuous (upper semicontinuous) in the phase variable. The corresponding results depend on the introduction of the space clos{sub ∅}(X) of all closed subsets (including the empty set) of an arbitrary metric space X; a metric on clos{sub ∅}(X) is proposed; the space clos{sub ∅}(X) is shown to be complete whenever the original space X is; a criterion for convergence of a sequence is put forward; mappings with values in clos{sub ∅}(X) are studied. Some results on set-valued mappings satisfying the Carathéodory conditions and having compact values in R{sup n} are shown to hold for mappings with values in clos{sub ∅}(R{sup n}), measurable in the first argument, and continuous in the proposed metric in the second argument. Bibliography: 22 titles.

  2. Atlas-Independent, Electrophysiological Mapping of the Optimal Locus of Subthalamic Deep Brain Stimulation for the Motor Symptoms of Parkinson Disease.

    PubMed

    Conrad, Erin C; Mossner, James M; Chou, Kelvin L; Patil, Parag G

    2018-05-23

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) improves motor symptoms of Parkinson disease (PD). However, motor outcomes can be variable, perhaps due to inconsistent positioning of the active contact relative to an unknown optimal locus of stimulation. Here, we determine the optimal locus of STN stimulation in a geometrically unconstrained, mathematically precise, and atlas-independent manner, using Unified Parkinson Disease Rating Scale (UPDRS) motor outcomes and an electrophysiological neuronal stimulation model. In 20 patients with PD, we mapped motor improvement to active electrode location, relative to the individual, directly MRI-visualized STN. Our analysis included a novel, unconstrained and computational electrical-field model of neuronal activation to estimate the optimal locus of DBS. We mapped the optimal locus to a tightly defined ovoid region 0.49 mm lateral, 0.88 mm posterior, and 2.63 mm dorsal to the anatomical midpoint of the STN. On average, this locus is 11.75 mm lateral, 1.84 mm posterior, and 1.08 mm ventral to the mid-commissural point. Our novel, atlas-independent method reveals a single, ovoid optimal locus of stimulation in STN DBS for PD. The methodology, here applied to UPDRS and PD, is generalizable to atlas-independent mapping of other motor and non-motor effects of DBS. © 2018 S. Karger AG, Basel.

  3. Optimization and analysis of large chemical kinetic mechanisms using the solution mapping method - Combustion of methane

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.

    1992-01-01

    A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.

  4. LEAP into the Pfizer Global Virtual Library (PGVL) space: creation of readily synthesizable design ideas automatically.

    PubMed

    Hu, Qiyue; Peng, Zhengwei; Kostrowicki, Jaroslav; Kuki, Atsuo

    2011-01-01

    Pfizer Global Virtual Library (PGVL) of 10^13 readily synthesizable molecules offers a tremendous opportunity for lead optimization and scaffold hopping in drug discovery projects. However, mining into a chemical space of this size presents a challenge for the concomitant design informatics due to the fact that standard molecular similarity searches against a collection of explicit molecules cannot be utilized, since no chemical information system could create and manage more than 10^8 explicit molecules. Nevertheless, by accepting a tolerable level of false negatives in search results, we were able to bypass the need for full 10^13 enumeration and enabled the efficient similarity search and retrieval into this huge chemical space for practical usage by medicinal chemists. In this report, two search methods (LEAP1 and LEAP2) are presented. The first method uses PGVL reaction knowledge to disassemble the incoming search query molecule into a set of reactants and then uses reactant-level similarities into actual available starting materials to focus on a much smaller sub-region of the full virtual library compound space. This sub-region is then explicitly enumerated and searched via a standard similarity method using the original query molecule. The second method uses a fuzzy mapping onto candidate reactions and does not require exact disassembly of the incoming query molecule. Instead Basis Products (or capped reactants) are mapped into the query molecule and the resultant asymmetric similarity scores are used to prioritize the corresponding reactions and reactant sets. All sets of Basis Products are inherently indexed to specific reactions and specific starting materials. This again allows focusing on a much smaller sub-region for explicit enumeration and subsequent standard product-level similarity search. A set of validation studies were conducted. The results have shown that the level of false negatives for the disassembly-based method is acceptable when the query molecule can be recognized for exact disassembly, and the fuzzy reaction mapping method based on Basis Products has an even better performance in terms of lower false-negative rate because it is not limited by the requirement that the query molecule needs to be recognized by any disassembly algorithm. Both search methods have been implemented and accessed through a powerful desktop molecular design tool (see ref. (33) for details). The chapter will end with a comparison of published search methods against large virtual chemical space.
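
    Both LEAP methods share one structural idea: prune the virtual space with a cheap reactant-level comparison, then enumerate and score only the focused sub-region at product level. The sketch below reduces that idea to set-based fingerprints and Tanimoto similarity; the fingerprint representation, the cutoff, and the caller-supplied enumerate_products function are hypothetical stand-ins for PGVL's chemistry-aware machinery.

    ```python
    # Two-stage search: cheap reactant-level pruning, then product-level ranking.
    def tanimoto(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def two_stage_search(query_fp, reactant_fps, enumerate_products, cutoff=0.4):
        # Stage 1: keep reactants whose fingerprints resemble the query's.
        hits = [r for r, fp in reactant_fps.items()
                if tanimoto(query_fp, fp) >= cutoff]
        # Stage 2: enumerate only the focused sub-library and rank its products;
        # enumerate_products is a caller-supplied (hypothetical) function that
        # returns (molecule_id, fingerprint) pairs.
        products = enumerate_products(hits)
        return sorted(products, key=lambda p: -tanimoto(query_fp, p[1]))
    ```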

  5. Phased Restoration Plan for Degraded Land in North Korea by the Clustered Distribution Pattern of Suitable Afforestation Plants

    NASA Astrophysics Data System (ADS)

    Lee, S. G.; Lee, W. K.; Choi, H. A.; Yoo, H.; Song, C.; Son, Y.; Cha, S.; Bae, S. W.

    2017-12-01

    Degraded forest in North Korea (DPRK; the Democratic People's Republic of Korea) is not a problem confined to the country itself; it could cause serious problems across the whole Korean Peninsula. The importance of restoring degraded land has recently increased, both to re-establish a healthy ecosystem and to alleviate food shortages in North Korea. Despite the efforts of the North Korean government, however, the degradation problem has consistently worsened. There are two main reasons why restoration has not been effective. The most critical one is the absence of the techniques and information needed for restoration, as the government concentrates on the urgent problems related to the poor food supply. The other is the demand for an efficient plan achievable within a short period. In this context, this study aims to select suitable trees according to spatial characteristics and to establish a phased restoration plan to support policy decisions on degraded land in North Korea. The trees suitable for restoration were taken from references covering the natural plant distribution of North and South Korea (ROK; Republic of Korea). An optimal environmental prediction map was derived from accumulated plant-physiology data describing each species' endemic environmental optimal range. The maps were integrated by order of priority: first, the suitability of tree species for each region, and second, the clustered distribution rate within the same species; the two priorities were combined by a weighting method. The results show that 23 afforestation species are suitable for restoration, and the more widely distributed plants agree with the major species of the Korean Peninsula. In the integrated, priority-weighted map, Picea jezoensis matches the widest area. The integrated map gives a spatial view of suitable restoration, but it is too fine-grained to be used directly in policy. Therefore, a three-step plan to support policy decisions is provided using Block Statistics, at 12.5 km (long-term general plan), 5 km (medium-term detailed plan), and 1 km (short-term implementation plan) resolutions.

  6. Use of an Annular Silicon Drift Detector (SDD) Versus a Conventional SDD Makes Phase Mapping a Practical Solution for Rare Earth Mineral Characterization.

    PubMed

    Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald

    2018-06-04

    A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most of them are limited to a global analysis due to their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at a low accelerating voltage identify phases at the micron scale. In comparisons between an annular and a conventional SDD, the aSDD performed at optimized conditions, making the phase map a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.

  7. Nomads with Maps: Musical Connections in a Glocalized World

    ERIC Educational Resources Information Center

    Richerme, Lauren Kapalka

    2013-01-01

    This article presents the author's views on the concepts of the philosophers Deleuze and Guattari on striated (sedentary) space and smooth (mobile) space, asserting that "nomads" can move freely about their space. She relates these concepts to music education, incorporating Deleuze and Guattari's concept of mapping as it…

  8. Cognitive mapping in mental time travel and mental space navigation.

    PubMed

    Gauthier, Baptiste; van Wassenhove, Virginie

    2016-09-01

    The ability to imagine ourselves in the past, in the future or in different spatial locations suggests that the brain can generate cognitive maps that are independent of the experiential self in the here and now. Using three experiments, we asked to what extent Mental Time Travel (MTT; imagining the self in time) and Mental Space Navigation (MSN; imagining the self in space) shared similar cognitive operations. For this, participants judged the ordinality of real historical events in time and in space with respect to different mental perspectives: for instance, participants mentally projected themselves in Paris in nine years, and judged whether an event occurred before or after, or east or west of, where they mentally stood. In all three experiments, symbolic distance effects in the time and space dimensions were quantified using Reaction Times (RT) and Error Rates (ER). When self-projected, participants were slower and less accurate (absolute distance effects); participants were also faster and more accurate when the spatial and temporal distances were further away from their mental viewpoint (relative distance effects). These effects show that MTT and MSN require egocentric mapping and that self-projection requires map transformations. Additionally, participants' performance was affected when self-projection was made in one dimension but judgements in another, revealing a competition between temporal and spatial mapping (Experiments 2 and 3). Altogether, our findings suggest that MTT and MSN are separately mapped although they require comparable allo- to ego-centric map conversion. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Delineation of Internal Mammary Nodal Target Volumes in Breast Cancer Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jethwa, Krishan R.; Kahila, Mohamed M.; Hunt, Katie N.

    Purpose: The optimal clinical target volume for internal mammary (IM) node irradiation is uncertain in an era of increasingly conformal volume-based treatment planning for breast cancer. We mapped the location of gross internal mammary lymph node (IMN) metastases to identify areas at highest risk of harboring occult disease. Methods and Materials: Patients with axial imaging of IMN disease were identified from a breast cancer registry. The IMN location was transferred onto the corresponding anatomic position on representative axial computed tomography images of a patient in the treatment position and compared with consensus group guidelines of IMN target delineation. Results: The IMN location in 67 patients with 130 IMN metastases was mapped. The location was in the first 3 intercostal spaces in 102 of 130 nodal metastases (78%), whereas 18 of 130 IMNs (14%) were located caudal to the third intercostal space and 10 of 130 IMNs (8%) were located cranial to the first intercostal space. Of the 102 nodal metastases within the first 3 intercostal spaces, 54 (53%) were located within the Radiation Therapy Oncology Group consensus volume. Relative to the IM vessels, 19 nodal metastases (19%) were located medially with a mean distance of 2.2 mm (SD, 2.9 mm) whereas 29 (28%) were located laterally with a mean distance of 3.6 mm (SD, 2.5 mm). Ninety percent of lymph nodes within the first 3 intercostal spaces would have been encompassed within a 4-mm medial and lateral expansion on the IM vessels. Conclusions: In women with indications for elective IMN irradiation, a 4-mm medial and lateral expansion on the IM vessels may be appropriate. In women with known IMN involvement, cranial extension to the confluence of the IM vein with the brachiocephalic vein with or without caudal extension to the fourth or fifth interspace may be considered provided that normal tissue constraints are met.

  10. Laboratory multiple-crystal X-ray topography and reciprocal-space mapping of protein crystals: influence of impurities on crystal perfection

    NASA Technical Reports Server (NTRS)

    Hu, Z. W.; Thomas, B. R.; Chernov, A. A.

    2001-01-01

    Double-axis multiple-crystal X-ray topography, rocking-curve measurements and triple-axis reciprocal-space mapping have been combined to characterize protein crystals using a laboratory source. Crystals of lysozyme and lysozyme crystals doped with acetylated lysozyme impurities were examined. It was shown that the incorporation of acetylated lysozyme into crystals of lysozyme induces mosaic domains that are responsible for the broadening and/or splitting of rocking curves and diffraction-space maps along the direction normal to the reciprocal-lattice vector, while the overall elastic lattice strain of the impurity-doped crystals does not appear to be appreciable in high angular resolution reciprocal-space maps. Multiple-crystal monochromatic X-ray topography, which is highly sensitive to lattice distortions, was used to reveal the spatial distribution of mosaic domains in crystals which correlates with the diffraction features in reciprocal space. Discussions of the influence of acetylated lysozyme on crystal perfection are given in terms of our observations.

  11. Laboratory multiple-crystal X-ray topography and reciprocal-space mapping of protein crystals: influence of impurities on crystal perfection.

    PubMed

    Hu, Z W; Thomas, B R; Chernov, A A

    2001-06-01

    Double-axis multiple-crystal X-ray topography, rocking-curve measurements and triple-axis reciprocal-space mapping have been combined to characterize protein crystals using a laboratory source. Crystals of lysozyme and lysozyme crystals doped with acetylated lysozyme impurities were examined. It was shown that the incorporation of acetylated lysozyme into crystals of lysozyme induces mosaic domains that are responsible for the broadening and/or splitting of rocking curves and diffraction-space maps along the direction normal to the reciprocal-lattice vector, while the overall elastic lattice strain of the impurity-doped crystals does not appear to be appreciable in high angular resolution reciprocal-space maps. Multiple-crystal monochromatic X-ray topography, which is highly sensitive to lattice distortions, was used to reveal the spatial distribution of mosaic domains in crystals which correlates with the diffraction features in reciprocal space. Discussions of the influence of acetylated lysozyme on crystal perfection are given in terms of our observations.

  12. Asymptotically stable phase synchronization revealed by autoregressive circle maps

    NASA Astrophysics Data System (ADS)

    Drepper, F. R.

    2000-11-01

    A specially designed form of nonlinear time series analysis is introduced, based on phases defined as polar angles in spaces spanned by a finite number of delayed coordinates. A canonical choice of the polar axis and a related implicit estimation scheme for the potentially underlying autoregressive circle map (next-phase map) guarantee the invertibility of reconstructed phase space trajectories to the original coordinates. The resulting Fourier-approximated, invertibility-enforcing phase space map allows us to detect conditional asymptotic stability of coupled phases. This comparatively general synchronization criterion unites two existing generalizations of the old concept and can successfully be applied, e.g., to phases obtained from electrocardiogram and airflow recordings characterizing cardiorespiratory interaction.
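
    To make the construction concrete, the sketch below defines a phase as the polar angle in a two-dimensional delay embedding and fits a next-phase (circle) map by least squares on a low-order Fourier basis. The delay, the model order, and the use of a plain delay pair instead of the paper's canonical polar-axis choice are simplifying assumptions.

    ```python
    # Phase from a 2-D delay embedding, then a Fourier least-squares fit of
    # the next-phase increment.
    import numpy as np

    def delay_phase(x, tau=5):
        z = x[tau:] - x[tau:].mean()
        w = x[:-tau] - x[:-tau].mean()
        return np.arctan2(w, z)           # polar angle in the (x_t, x_{t-tau}) plane

    def fit_circle_map(phi, order=3):
        """Fit phi_{t+1} - phi_t ~ a0 + sum_k [a_k cos(k phi) + b_k sin(k phi)]."""
        a = phi[:-1]
        dphi = np.angle(np.exp(1j * (phi[1:] - phi[:-1])))  # wrapped increment
        cols = [np.ones_like(a)]
        for k in range(1, order + 1):
            cols += [np.cos(k * a), np.sin(k * a)]
        coef, *_ = np.linalg.lstsq(np.column_stack(cols), dphi, rcond=None)
        return coef
    ```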

  13. Mapping Queer Bioethics: Space, Place, and Locality.

    PubMed

    Wahlert, Lance

    2016-01-01

    This article, which introduces the special issue of the Journal of Homosexuality on "Mapping Queer Bioethics," begins by offering an overview of the analytical scope of the issue. Specifically, the first half of this essay raises critical questions central to the concept of a space-related queer bioethics, such as: How do we appreciate and understand the special needs of queer parties given the constraints of location, space, and geography? The second half of this article describes each feature article in the issue, as well as the subsequent special sections on the ethics of reading literal, health-related maps ("Cartographies") and scrutinizing the history of this journal as concerns LGBT health ("Mapping the Journal of Homosexuality").

  14. Parietal and superior frontal visuospatial maps activated by pointing and saccades

    PubMed Central

    Hagler, D.J.; Riecke, L.; Sereno, M.I.

    2009-01-01

    A recent study from our laboratory demonstrated that parietal cortex contains a map of visual space related to saccades and spatial attention and identified this area as the likely human homologue of the lateral intraparietal area (LIP). A human homologue for the parietal reach region (PRR), thought to preferentially encode planned hand movements, has also been recently proposed. Both of these areas, originally identified in the macaque monkey, have been shown to encode space with eye-centered coordinates. Functional magnetic resonance imaging (fMRI) of humans was used to test the hypothesis that the putative human PRR contains a retinotopic map recruited by finger pointing but not saccades and to test more generally for differences in the visuospatial maps recruited by pointing and saccades. We identified multiple maps in both posterior parietal cortex and superior frontal cortex recruited for eye and hand movements, including maps not observed in previous mapping studies. Pointing and saccade maps were generally consistent within single subjects. We have developed new group analysis methods for phase-encoded data, which revealed subtle differences between pointing and saccades, including hemispheric asymmetries, but we did not find evidence of pointing-specific maps of visual space. PMID:17376706

  15. EnGeoMAP - geological applications within the EnMAP hyperspectral satellite science program

    NASA Astrophysics Data System (ADS)

    Boesche, N. K.; Mielke, C.; Rogass, C.; Guanter, L.

    2016-12-01

    Hyperspectral investigations from near field to space substantially contribute to geological exploration and mining monitoring of raw material and mineral deposits. Due to their spectral characteristics, large mineral occurrences and minefields can be identified from space and the spatial distribution of distinct proxy minerals be mapped. In the frame of the EnMAP hyperspectral satellite science program, a mineral and elemental mapping tool was developed - EnGeoMAP. It contains a basic mineral mapping and a rare earth element mapping approach. This study shows the performance of EnGeoMAP based on simulated EnMAP data of the rare-earth-element-bearing Mountain Pass Carbonatite Complex, USA, and the Rodalquilar and Lomilla Calderas, Spain, which host the economically relevant gold-silver, lead-zinc-silver-gold and alunite deposits. The Mountain Pass image data were simulated on the basis of AVIRIS Next Generation images, while the Rodalquilar data are based on HyMap images. The EnGeoMAP - Base approach was applied to both images, while the Mountain Pass image data were additionally analysed using the EnGeoMAP - REE software tool. The results are mineral and elemental maps that serve as proxies for the regional lithology and deposit types. The validation of the maps is based on chemical analyses of field samples. Current airborne sensors meet the spatial and spectral requirements for detailed mineral mapping, and future hyperspectral spaceborne missions will additionally provide large coverage. For those hyperspectral missions, EnGeoMAP is a rapid data analysis tool that is provided to spectral geologists working in mineral exploration.

  16. Fast approximate delivery of fluence maps for IMRT and VMAT

    NASA Astrophysics Data System (ADS)

    Balvert, Marleen; Craft, David

    2017-02-01

    In this article we provide a method to generate the trade-off between delivery time and fluence map matching quality for dynamically delivered fluence maps. At the heart of our method lies a mathematical programming model that, for a given duration of delivery, optimizes leaf trajectories and dose rates such that the desired fluence map is reproduced as well as possible. We begin with the single fluence map case and then generalize the model and the solution technique to the delivery of sequential fluence maps. The resulting large-scale, non-convex optimization problem was solved using a heuristic approach. We test our method using a prostate case and a head and neck case, and present the resulting trade-off curves. Analysis of the leaf trajectories reveals that short time plans have larger leaf openings in general than longer delivery time plans. Our method allows one to explore the continuum of possibilities between coarse, large segment plans characteristic of direct aperture approaches and narrow field plans produced by sliding window approaches. Exposing this trade-off will allow for an informed choice between plan quality and solution time. Further research is required to speed up the optimization process to make this method clinically implementable.
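
    For intuition about what "leaf trajectories that reproduce a fluence map" means, the sketch below shows the classic unidirectional sliding-window decomposition of a single 1-D fluence profile into opening and closing leaf-passage times. The paper's method jointly optimizes trajectories and dose rate under a delivery-time budget, which this closed-form decomposition does not attempt.

    ```python
    # Unidirectional sliding-window decomposition of a 1-D fluence profile.
    import numpy as np

    def sliding_window(fluence):
        """Return (t_open, t_close): cumulative MU at which the leading leaf
        uncovers, and the trailing leaf covers, each profile position."""
        f = np.concatenate([[0.0], np.asarray(fluence, dtype=float)])
        d = np.diff(f)
        t_open = np.cumsum(np.maximum(-d, 0.0))     # leading-leaf passage times
        t_close = np.cumsum(np.maximum(d, 0.0))     # trailing-leaf passage times
        assert np.allclose(t_close - t_open, f[1:])  # exposure equals fluence
        return t_open, t_close
    ```

    Both passage-time sequences are monotonically non-decreasing, as leaf motion requires, and their difference reproduces the profile exactly; delivery time is proportional to the final cumulative MU.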

  17. Automated Identification of Coronal Holes from Synoptic EUV Maps

    NASA Astrophysics Data System (ADS)

    Hamada, Amr; Asikainen, Timo; Virtanen, Ilpo; Mursula, Kalevi

    2018-04-01

    Coronal holes (CHs) are regions of open magnetic field lines in the solar corona and the source of the fast solar wind. Understanding the evolution of coronal holes is critical for solar magnetism as well as for accurate space weather forecasts. We study the extreme ultraviolet (EUV) synoptic maps at three wavelengths (195 Å/193 Å, 171 Å and 304 Å) measured by the Solar and Heliospheric Observatory/Extreme Ultraviolet Imaging Telescope (SOHO/EIT) and the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) instruments. The two datasets are first homogenized by scaling the SDO/AIA data to the SOHO/EIT level by means of histogram equalization. We then develop a novel automated method to identify CHs from these homogenized maps by determining the intensity threshold of CH regions separately for each synoptic map. This is done by identifying the best location and size of an image segment, which optimally contains portions of coronal holes and the surrounding quiet Sun allowing us to detect the momentary intensity threshold. Our method is thus able to adjust itself to the changing scale size of coronal holes and to temporally varying intensities. To make full use of the information in the three wavelengths we construct a composite CH distribution, which is more robust than distributions based on one wavelength. Using the composite CH dataset we discuss the temporal evolution of CHs during the Solar Cycles 23 and 24.
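
    Two steps of the pipeline translate directly into code: histogram matching to put the two instruments on a common intensity scale, and per-map thresholding of the matched image. The sketch below uses a standard CDF-inversion histogram match; the explicit threshold argument stands in for the paper's adaptive image-segment search, which is not reproduced here.

    ```python
    # CDF-inversion histogram matching plus a simple dark-pixel threshold.
    import numpy as np

    def histogram_match(source, reference):
        """Rescale 'source' so its value distribution matches 'reference'."""
        s_vals, s_idx, s_cnt = np.unique(source.ravel(), return_inverse=True,
                                         return_counts=True)
        r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_cnt) / source.size
        r_cdf = np.cumsum(r_cnt) / reference.size
        matched = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
        return matched[s_idx].reshape(source.shape)

    def coronal_hole_mask(aia_map, eit_reference, threshold):
        # 'threshold' stands in for the paper's adaptive per-map estimate.
        return histogram_match(aia_map, eit_reference) < threshold  # CHs are dark
    ```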

  18. Fixed points of contractive mappings in b-metric-like spaces.

    PubMed

    Hussain, Nawab; Roshan, Jamal Rezaei; Parvaneh, Vahid; Kadelburg, Zoran

    2014-01-01

    We discuss topological structure of b-metric-like spaces and demonstrate a fundamental lemma for the convergence of sequences. As an application we prove certain fixed point results in the setup of such spaces for different types of contractive mappings. Finally, some periodic point results in b-metric-like spaces are obtained. Two examples are presented in order to verify the effectiveness and applicability of our main results.
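
    Results of this kind ultimately guarantee that iterating a suitable contractive map converges to its unique fixed point. As a plain numerical illustration (in the ordinary metric space (R, |·|), not a b-metric-like space), the sketch below runs Picard iteration:

    ```python
    # Picard iteration for a contractive self-map on the real line.
    import math

    def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
        x = x0
        for _ in range(max_iter):
            x_next = T(x)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        raise RuntimeError("no convergence within max_iter")

    print(fixed_point(math.cos, 1.0))   # ~0.739085..., the fixed point of cos
    ```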

  19. SOM neural network fault diagnosis method of polymerization kettle equipment optimized by improved PSO algorithm.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Gao, Jie

    2014-01-01

    To meet the real-time fault diagnosis and optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. First, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm, with a new dynamical adjustment method for the inertia weights, is adopted to optimize the structural parameters of the SOM neural network. Fault pattern classification of the polymerization kettle equipment then realizes the nonlinear mapping from a given symptom set to the fault set. Finally, fault diagnosis simulation experiments are conducted with industrial on-site historical data of the polymerization kettle, and the results show that the proposed PSO-SOM fault diagnosis strategy is effective.
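
    Below is a minimal sketch of PSO with a time-varying inertia weight, the ingredient the abstract highlights. The linear weight schedule shown here is the common textbook choice, not the paper's new dynamical adjustment rule, and the objective is an arbitrary callable (e.g., an SOM quantization error as a function of the network's structural parameters).

    ```python
    # PSO minimizing a generic objective with a decreasing inertia weight.
    import numpy as np

    def pso(objective, dim, n=30, iters=200, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
        rng = np.random.default_rng(3)
        x = rng.uniform(-1.0, 1.0, (n, dim))
        v = np.zeros((n, dim))
        pbest = x.copy()
        pval = np.apply_along_axis(objective, 1, x)
        g = pbest[pval.argmin()]
        for t in range(iters):
            w = w_max - (w_max - w_min) * t / iters      # inertia schedule
            r1, r2 = rng.random((2, n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            val = np.apply_along_axis(objective, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()]
        return g, pval.min()
    ```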

  20. IMAGING AND MEASUREMENT OF THE PRERETINAL SPACE IN VITREOMACULAR ADHESION AND VITREOMACULAR TRACTION BY A NEW SPECTRAL DOMAIN OPTICAL COHERENCE TOMOGRAPHY ANALYSIS.

    PubMed

    Stopa, Marcin; Marciniak, Elżbieta; Rakowicz, Piotr; Stankiewicz, Agnieszka; Marciniak, Tomasz; Dąbrowski, Adam

    2017-10-01

    To evaluate a new method for volumetric imaging of the preretinal space (also known as the subhyaloid, subcortical, or retrocortical space) and investigate differences in preretinal space volume in vitreomacular adhesion (VMA) and vitreomacular traction (VMT). Nine patients with VMA and 13 with VMT were prospectively evaluated. Automatic inner limiting membrane line segmentation, which exploits graph search theory implementation, and posterior cortical vitreous line segmentation were performed on 141 horizontal spectral domain optical coherence tomography B-scans per patient. Vertical distances (depths) between the posterior cortical vitreous and inner limiting membrane lines were calculated for each optical coherence tomography B-scan acquired. The derived distances were merged and visualized as a color depth map that represented the preretinal space between the posterior surface of the hyaloid and the anterior surface of the retina. The early treatment diabetic retinopathy study macular map was overlaid onto final virtual maps, and preretinal space volumes were calculated for each early treatment diabetic retinopathy study map sector. Volumetric maps representing preretinal space volumes were created for each patient in the VMA and VMT groups. Preretinal space volumes were larger in all early treatment diabetic retinopathy study map macular regions in the VMT group compared with those in the VMA group. The differences reached statistical significance in all early treatment diabetic retinopathy study sectors, except for the superior outer macula and temporal outer macula where significance values were P = 0.05 and P = 0.08, respectively. Overall, the relative differences in preretinal space volumes between the VMT and VMA groups varied from 2.7 to 4.3 in inner regions and 1.8 to 2.9 in outer regions. Our study provides evidence of significant differences in preretinal space volume between eyes with VMA and those with VMT. This may be useful not only in the investigation of preretinal space properties in VMA and VMT, but also in other conditions, such as age-related macular degeneration, diabetic retinopathy, and central retinal vein occlusion.
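
    The volumetric step reduces to a per-A-scan subtraction of two segmented surfaces and a scaled sum. A sketch under assumed inputs (two 2-D arrays of surface depths in pixels, one row per B-scan); the voxel spacings are placeholders, not the device's calibration:

    ```python
    # Depth map and volume from two segmented OCT surfaces.
    import numpy as np

    def preretinal_volume(pcv_depth_px, ilm_depth_px,
                          dx_mm=0.047, dy_mm=0.047, dz_mm=0.0039):
        """Inputs: 2-D arrays of surface depths in pixels
        (one row per B-scan, one column per A-scan)."""
        gap_px = np.clip(ilm_depth_px - pcv_depth_px, 0, None)  # preretinal gap
        depth_map_mm = gap_px * dz_mm           # source of the color depth map
        volume_mm3 = depth_map_mm.sum() * dx_mm * dy_mm
        return depth_map_mm, volume_mm3
    ```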

  1. The Method of Multiple Spatial Planning Basic Map

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Fang, C.

    2018-04-01

    The "Provincial Space Plan Pilot Program" issued in December 2016 pointed out that the existing space management and control information management platforms of various departments were integrated, and a spatial planning information management platform was established to integrate basic data, target indicators, space coordinates, and technical specifications. The planning and preparation will provide supportive decision support, digital monitoring and evaluation of the implementation of the plan, implementation of various types of investment projects and space management and control departments involved in military construction projects in parallel to approve and approve, and improve the efficiency of administrative approval. The space planning system should be set up to delimit the control limits for the development of production, life and ecological space, and the control of use is implemented. On the one hand, it is necessary to clarify the functional orientation between various kinds of planning space. On the other hand, it is necessary to achieve "multi-compliance" of various space planning. Multiple spatial planning intergration need unified and standard basic map(geographic database and technical specificaton) to division of urban, agricultural, ecological three types of space and provide technical support for the refinement of the space control zoning for the relevant planning. The article analysis the main space datum, the land use classification standards, base map planning, planning basic platform main technical problems. Based on the geographic conditions, the results of the census preparation of spatial planning map, and Heilongjiang, Hainan many rules combined with a pilot application.

  2. Local Free Space Mapping and Path Guidance,

    DTIC Science & Technology

    1987-03-01

    [OCR-garbled report documentation page; the recoverable details are the title, "Local Free Space Mapping and Path Guidance", by William T. Cox and Nancy L. Campbell, Naval Ocean Systems Center.]

  3. Mapping experiment with space station

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.

    1986-01-01

    Mapping of the Earth from space stations can be approached in two areas. One is to collect gravity data for defining the topographic datum using the Earth's gravity field in terms of spherical harmonics. The other is to search for and explore techniques of mapping topography using either optical or radar images, with or without reference to ground control points. Without ground control points, an integrated camera system can be designed. With ground control points, the position of the space station (camera station) can be precisely determined at any instant. Therefore, terrestrial topography can be precisely mapped either by conventional photogrammetric methods or by current digital technology of image correlation. For the mapping experiment, it is proposed to establish four ground points either in North America or Africa (including the Sahara desert). If this experiment is successfully accomplished, it may also be applied to defense charting systems.

  4. Space moving target detection using time domain feature

    NASA Astrophysics Data System (ADS)

    Wang, Min; Chen, Jin-yong; Gao, Feng; Zhao, Jin-yu

    2018-01-01

    Traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets and cannot make full use of time domain information. This paper presents a new space moving target detection method based on time domain features. We first construct the time spectral data of the star map, then analyze the time domain features of the main objects (target, stars and background) in star maps, and finally detect the moving targets using the single-pulse feature of the time domain signal. Detection experiments on real star maps show that the proposed method can effectively detect the trajectory of moving targets in the star map sequence, and the detection probability reaches 99% at a false alarm rate of about 8×10^-5, outperforming the compared algorithms.
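
    The single-pulse idea lends itself to a simple per-pixel statistic: in a registered stack of star maps, stars and background are roughly constant in time, while a moving target deposits a brief pulse in each pixel it crosses. The sketch below flags pixels whose temporal peak deviates strongly from their own robust baseline; the MAD-based statistic and the threshold are illustrative assumptions, not the paper's detector.

    ```python
    # Flag pixels with a strong transient pulse in their time series.
    import numpy as np

    def detect_moving(stack, k=6.0):
        """stack: (frames, rows, cols) array of registered star maps."""
        med = np.median(stack, axis=0)                    # per-pixel baseline
        mad = np.median(np.abs(stack - med), axis=0) + 1e-6
        peak = (stack - med).max(axis=0) / (1.4826 * mad) # robust z-score
        return peak > k            # boolean map of candidate target transits
    ```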

  5. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    Nowadays, using satellites in space to observe the ground is a major method of obtaining ground information. With the development of space science and technology, fields such as the military and the economy place ever greater demands on space technology because of the benefits of satellites: wide coverage, timeliness, and freedom from area and national restrictions. At the same time, because of the wide use of all kinds of satellites, sensors, repeater satellites and ground receiving stations, ground control systems now face great challenges; how to make the best use of satellite resources is therefore an important problem for them. Satellite scheduling distributes the resources to all tasks without conflict so as to complete as many tasks as possible and meet user requirements, under the constraints imposed by the satellites, sensors and ground receiving stations. According to their size, tasks can be divided into point tasks and area tasks; this paper only considers point targets. The paper first gives a description of the satellite scheduling problem and a brief introduction to the theory of satellite scheduling. We also analyze the resource and task constraints in satellite scheduling and briefly describe the input and output flow of the scheduling process. On the basis of these analyses, we put forward a scheduling model, the multi-variable optimization model for multi-satellite, point-target tasks in swinging mode. In this model the scheduling problem is transformed into a parametric optimization problem, where the parameters to optimize are the swinging angles of the time windows. With a view to efficiency and accuracy, several important problems in satellite scheduling, such as the angular relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows, are analyzed and discussed, and several strategies to improve the efficiency of the model are put forward. To solve the model, we introduce the concept of the activity sequence map, by which the choice of activity and the start time of the activity can be separated. We also introduce three neighborhood operators to search the solution space, and use the front-movement remaining time and the back-movement remaining time to analyze the feasibility of generating solutions from the neighborhood operators. Finally, an algorithm to solve the model is put forward based on a genetic algorithm; population initialization, the crossover operator, the mutation operator, individual evaluation, the collision-decrease operator, the selection operator and the collision-elimination operator are designed in the paper. The scheduling result and a simulation for a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15 and 25 degrees. The results show that the model and the algorithm are more effective than those without swinging mode.
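
    A toy skeleton of the evolutionary search described above: a chromosome assigns one swing angle per candidate time window, and the fitness callable (caller-supplied and hypothetical here) would count conflict-free scheduled targets. Selection, one-point crossover and mutation are generic textbook operators; the paper's collision-decrease and collision-elimination operators and the activity sequence map are not reproduced.

    ```python
    # Generic GA over per-time-window swing angles.
    import numpy as np

    rng = np.random.default_rng(4)

    def ga(fitness, n_windows, angles=(-25, -15, -5, 0, 5, 15, 25),
           pop=40, gens=100, pmut=0.05):
        P = rng.choice(angles, size=(pop, n_windows))
        for _ in range(gens):
            f = np.array([fitness(ind) for ind in P])
            P = P[f.argsort()[::-1]][:pop // 2]           # elitist selection
            kids = []
            while len(kids) < pop - len(P):
                a, b = P[rng.integers(len(P), size=2)]
                cut = rng.integers(1, n_windows)
                child = np.concatenate([a[:cut], b[cut:]])  # 1-point crossover
                mask = rng.random(n_windows) < pmut         # mutation
                child[mask] = rng.choice(angles, mask.sum())
                kids.append(child)
            P = np.vstack([P, kids])
        f = np.array([fitness(ind) for ind in P])
        return P[f.argmax()], f.max()
    ```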

  6. Astronaut Kevin Chilton displays map of Scandinavia on flight deck

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Astronaut Kevin P. Chilton, pilot, displays a map of Scandinavia on the Space Shuttle Endeavour's flight deck. Large scale maps such as this were used by the crew to locate specific sites of interest to the Space Radar Laboratory scientists. The crew then photographed the sites at the same time as the radar in the payload bay imaged them.

  7. A high-density transcript linkage map with 1,845 expressed genes positioned by microarray-based Single Feature Polymorphisms (SFP) in Eucalyptus

    PubMed Central

    2011-01-01

    Background Technological advances are progressively increasing the application of genomics to a wider array of economically and ecologically important species. High-density maps enriched for transcribed genes facilitate the discovery of connections between genes and phenotypes. We report the construction of a high-density linkage map of expressed genes for the heterozygous genome of Eucalyptus using Single Feature Polymorphism (SFP) markers. Results SFP discovery and mapping were achieved using pseudo-testcross screening and selective mapping to simultaneously optimize linkage mapping and microarray costs. SFP genotyping was carried out by hybridizing complementary RNA, prepared from the xylem of 4.5-year-old trees, to an SFP array containing 103,000 25-mer oligonucleotide probes representing 20,726 unigenes derived from a modest-sized expressed sequence tag collection. An SFP-mapping microarray with 43,777 selected candidate SFP probes representing 15,698 genes was subsequently designed and used to genotype SFPs in a larger subset of the segregating population drawn by selective mapping. A total of 1,845 genes were mapped, 884 of them ordered with high likelihood support on a framework map anchored to 180 microsatellites with an average density of 1.2 cM. Using more probes per unigene doubled the likelihood of detecting segregating SFPs, eventually resulting in more genes mapped. In silico validation showed that 87% of the SFPs map to the expected location on the 4.5X draft sequence of the Eucalyptus grandis genome. Conclusions The Eucalyptus 1,845-gene map is the most highly enriched map for transcriptional information for any forest tree species to date. It represents a major improvement on the number of genes previously positioned on Eucalyptus maps and provides an initial glimpse at the gene space of this global tree genome. A general protocol is proposed to build high-density transcript linkage maps in less characterized plant species by SFP genotyping, with a concurrent objective of reducing microarray costs. High-density gene-rich maps represent a powerful resource to assist gene discovery endeavors when used in combination with QTL and association mapping, and should be especially valuable in assisting the assembly of the reference genome sequences soon to come for several plant and animal species. PMID:21492453

  8. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space

    DTIC Science & Technology

    2015-05-01

    ARL-TR-7294 • MAY 2015 • US Army Research Laboratory. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space, by Berend Christopher...

  9. Improving Operational Effectiveness of Tactical Long Endurance Unmanned Aerial Systems (TALEUAS) by Utilizing Solar Power

    DTIC Science & Technology

    2014-06-01

    ...a discretized map, and use the map to optimally solve the navigation task. The optimal navigation solution utilizes the well-known travelling salesman problem...

  10. Short-term cascaded hydroelectric system scheduling based on chaotic particle swarm optimization using improved logistic map

    NASA Astrophysics Data System (ADS)

    He, Yaoyao; Yang, Shanlin; Xu, Qifa

    2013-07-01

    To solve a model of short-term cascaded hydroelectric system scheduling, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which takes the water discharges as the decision variables and handles constraints with a death-penalty function. Following the principle of maximum power generation, the proposed approach exploits the ergodicity, symmetry, and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results demonstrate the effectiveness and robustness of the proposed CPSO algorithm in comparison with other traditional algorithms.
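    As a concrete illustration of the chaotic-initialization idea, the sketch below uses the classical logistic map x_{k+1} = 4·x_k·(1 − x_k); the paper's "improved" logistic map differs in detail, and the particle count, dimension, and bounds are arbitrary stand-ins for the discharge decision variables.

```python
import numpy as np

def logistic_sequence(n, x0=0.63):
    """Generate n chaotic values in (0, 1) with the logistic map."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_init(n_particles, dim, lo, hi):
    """Map a chaotic sequence into the search box [lo, hi]^dim."""
    seq = logistic_sequence(n_particles * dim).reshape(n_particles, dim)
    return lo + seq * (hi - lo)

# toy usage: 30 particles in a 5-dimensional discharge-decision space
swarm = chaotic_init(30, 5, lo=0.0, hi=100.0)
print(swarm.shape, swarm.min(), swarm.max())
```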

  11. The topographic grain concept in DEM-based geomorphometric mapping

    NASA Astrophysics Data System (ADS)

    Józsa, Edina

    2016-04-01

    A common drawback of geomorphological analyses based on digital elevation datasets is the need to define a search-window size for the derivation of morphometric variables. The fixed-size neighbourhood determines the scale of the analysis and mapping, which can lead to the generalization of smaller surface details or the elimination of larger landform elements. Methods of DEM-based geomorphometric mapping are constantly developing in the direction of multi-scale landform delineation, but the optimal threshold for the search-window size is still a limiting factor. A possible way to determine a suitable value for this parameter is to consider the topographic grain principle (Wood, W. F. - Snell, J. B. 1960, Pike, R. J. et al. 1989). The calculation is implemented as a bash shell script for GRASS GIS to determine the optimal threshold for the r.geomorphon module. The approach relies on the potential of the topographic grain to detect the characteristic local ridgeline-to-channel spacing: by calculating relative relief values with nested neighbourhood matrices, it is possible to define a break-point at which the rate of increase in the local relief encountered by the sample drops significantly. The geomorphons approach (Jasiewicz, J. - Stepinski, T. F. 2013) is a cell-based DEM classification method for the identification of landform elements at a broad range of scales using a line-of-sight technique. Landforms larger than the maximum lookup distance are broken down into smaller elements, so the threshold needs to be set to a relatively large value; conversely, the computational requirements and the size of the study sites determine the upper limit for the value. The aim was therefore to create a tool that helps determine the optimal parameter for the r.geomorphon tool, making it possible to produce more objective and consistent maps while achieving the full efficiency of this mapping technique. For a thorough analysis of the applicability of the proposed methodology, a test site covering hilly and low mountainous regions in Southern Transdanubia, Hungary was chosen. The freely available SRTM DSM with 1 arc-second resolution was used as the elevation dataset, after the necessary error correction. Based on the delineated landform elements and morphometric variables, the physiographic characteristics of the landscape could be analysed and compared with the existing expert-based map of microregions. References: Wood, W. F. and J. B. Snell (1960). A quantitative system for classifying landforms. - Technical Report EP-124. U.S. Army Quartermaster Research and Engineering Center, Natick, 20 pp. Pike, R. J., et al. (1989). Topographic grain automated from digital elevation models. - Proceedings, Auto-Carto 9, ASPRS/ACSM Baltimore MD, 2-7 April 1989. Jasiewicz, J. and T. F. Stepinski (2013). Geomorphons - a pattern recognition approach to classification and mapping of landforms. - Geomorphology 182(0): 147-156.
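    The break-point idea can be made concrete with a short sketch: grow the analysis window around a sample point, record the relative relief it encounters, and stop where the relief gain per size step collapses. The DEM handling, the synthetic surface, and the 5% drop threshold below are illustrative assumptions, not the GRASS GIS script itself.

```python
import numpy as np

def relative_relief(dem, row, col, half):
    """Max minus min elevation in a (2*half+1)-sized window."""
    win = dem[max(0, row - half):row + half + 1,
              max(0, col - half):col + half + 1]
    return float(win.max() - win.min())

def topographic_grain(dem, row, col, max_half=50, drop=0.05):
    """Return the window half-size where relief growth levels off."""
    reliefs = [relative_relief(dem, row, col, h) for h in range(1, max_half)]
    for h in range(1, len(reliefs)):
        gain = reliefs[h] - reliefs[h - 1]
        if gain < drop * reliefs[h]:      # growth rate has collapsed
            return h + 1                  # half-sizes started at 1
    return max_half

rng = np.random.default_rng(0)
dem = rng.normal(200, 10, size=(200, 200))     # toy elevation surface
print(topographic_grain(dem, 100, 100))
```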

  12. KSC-2013-3238

    NASA Image and Video Library

    2013-08-09

    CAPE CANAVERAL, Fla. – As seen on Google Maps, a Space Shuttle Main Engine, or SSME, stands inside the Engine Shop at Orbiter Processing Facility 3 at NASA's Kennedy Space Center. Each orbiter used three of the engines during launch and ascent into orbit. The engines burn super-cold liquid hydrogen and liquid oxygen and each one produces 155,000 pounds of thrust. The engines, known in the industry as RS-25s, could be reused on multiple shuttle missions. They will be used again later this decade for NASA's Space Launch System rocket. Google precisely mapped the space center and some of its historical facilities for the company's map page. The work allows Internet users to see inside buildings at Kennedy as they were used during the space shuttle era. Photo credit: Google/Wendy Wang

  13. Existence of Lipschitz selections of the Steiner map

    NASA Astrophysics Data System (ADS)

    Bednov, B. B.; Borodin, P. A.; Chesnokova, K. V.

    2018-02-01

    This paper is concerned with the problem of the existence of Lipschitz selections of the Steiner map St_n, which associates with n points of a Banach space X the set of their Steiner points. The answer to this problem depends on the geometric properties of the unit sphere S(X) of X, its dimension, and the number n. For n ≥ 4, general conditions are obtained on the space X under which St_n admits no Lipschitz selection. When X is finite-dimensional it is shown that, if n ≥ 4 is even, the map St_n has a Lipschitz selection if and only if S(X) is a finite polytope; this is not true if n ≥ 3 is odd. For n = 3 the (single-valued) map St_3 is shown to be Lipschitz continuous in any smooth strictly convex two-dimensional space; this ceases to be true in three-dimensional spaces. Bibliography: 21 titles.

  14. Study on the mapping of dark matter clustering from real space to redshift space

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Song, Yong-Seon

    2016-08-01

    The mapping of dark matter clustering from real space to redshift space introduces an anisotropy into the measured density power spectrum in redshift space, known as the redshift-space distortion effect. The mapping formula is intrinsically non-linear: it is complicated by higher-order polynomials arising from indefinite cross-correlations between the density and velocity fields, and by the Finger-of-God (FoG) effect due to the randomness of the peculiar velocity field. While the full higher-order polynomials remain unknown, the other systematics can be controlled consistently within the same-order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by measuring all terms in the expansion separately and directly from simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: (1) the "one-point" FoG term, which is independent of the separation vector between two different points, and (2) the "correlated" FoG term, which appears as an indefinite polynomial and is expanded to the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that a Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space down to the smallest scales yet, up to k ~ 0.2 Mpc⁻¹, considering the resolution of future experiments.
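    For concreteness, the truncated mapping described above can be written in a widely used form; the notation here is generic (δ and θ are the density and velocity-divergence fields, μ the line-of-sight cosine, σ_v the single free dispersion parameter, and A, B the correlated higher-order polynomials), and the authors' exact expansion may differ in detail.

```latex
% A common same-order truncation of the real-to-redshift-space mapping;
% the Gaussian prefactor G is the "one-point" FoG term with one free
% parameter, while A and B collect the correlated higher-order terms.
P^{(s)}(k,\mu) \simeq G(k\mu\sigma_v)\,
  \bigl[ P_{\delta\delta}(k) + 2\mu^{2} P_{\delta\theta}(k)
       + \mu^{4} P_{\theta\theta}(k) + A(k,\mu) + B(k,\mu) \bigr],
\qquad G(x) = e^{-x^{2}}.
```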

  15. Representation of DNA sequences in genetic codon context with applications in exon and intron prediction.

    PubMed

    Yin, Changchuan

    2015-04-01

    To apply digital signal processing (DSP) methods to the analysis of DNA sequences, the sequences must first be mapped into numerical sequences. Effective numerical mappings of DNA sequences therefore play a key role in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, the existing methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC), in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied to exon and intron prediction using a Short-Time Fourier Transform (STFT) approach. The results show that the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein-coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
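    The quantity being maximized is easy to state in code. The sketch below computes a period-3 SNR for a numerically mapped sequence; the per-base values in TOY_MAP are made-up stand-ins, not the optimized GCC values from the paper.

```python
import numpy as np

TOY_MAP = {'A': 0.12, 'C': 0.45, 'G': 0.78, 'T': 0.31}   # assumed values

def snr3(seq, mapping=TOY_MAP):
    """Power at the period-3 frequency over the mean spectral power."""
    x = np.array([mapping[b] for b in seq], dtype=float)
    X = np.fft.fft(x - x.mean())
    power = np.abs(X) ** 2
    k3 = len(seq) // 3                     # period-3 frequency bin
    return power[k3] / power[1:].mean()    # skip the DC bin

exon_like = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG" * 5
print(round(snr3(exon_like), 2))
```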

  16. HYTHIRM Radiance Modeling and Image Analyses in Support of STS-119, STS-125 and STS-128 Space Shuttle Hypersonic Re-entries

    NASA Technical Reports Server (NTRS)

    Gibson, David M.; Spisz, Thomas S.; Taylor, Jeff C.; Zalameda, Joseph N.; Horvath, Thomas J.; Tomek, Deborah M.; Tietjen, Alan B.; Tack, Steve; Bush, Brett C.

    2010-01-01

    We provide the first geometrically accurate (i.e., 3-D) temperature maps of the entire windward surface of the Space Shuttle during hypersonic re-entry. To accomplish this task we began with estimated surface temperatures derived from CFD models at integral high Mach numbers and used them, the Shuttle's surface properties, and reasonable estimates of the sensor-to-target geometry to predict the spectral radiance emitted by the surface (in units of W sr⁻¹ m⁻² nm⁻¹). These data were converted to sensor counts using properties of the sensor (e.g., aperture, spectral band, and various efficiencies), the expected background, and the atmospheric transmission, to inform the optimal settings for the near-infrared and midwave-IR cameras on the Cast Glance aircraft. Once these data were collected, calibrated, edited, registered, and co-added, we formed both 2-D maps of the scene in the above units and 3-D maps of the bottom surface in temperature that could be compared not only with the initial inputs but also with thermocouple data from the Shuttle itself. The 3-D temperature-mapping process was based on the initial radiance-modeling process: temperatures were guessed for each node in a well-resolved 3-D framework, a radiance model was produced and compared to the processed imagery, and corrections to the temperature were estimated until the iterative process converged. This process did very well in characterizing the temperature structure of the large asymmetric boundary-layer transition that covered much of the starboard bottom surface of STS-119 Discovery. Both internally estimated accuracies and differences with CFD models and thermocouple measurements are at most a few percent. The technique did less well characterizing the temperature structure of the turbulent wedge behind the trip, owing to limitations in understanding the true sensor resolution. (Note: Those less inclined to read the entire paper are encouraged to read the Executive Summary provided at the end.)

  17. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  18. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  19. A novel model of motor learning capable of developing an optimal movement control law online from scratch.

    PubMed

    Shimansky, Yury P; Kang, Tao; He, Jiping

    2004-02-01

    A computational model of a learning system (LS) is described that acquires the knowledge and skill necessary for optimal control of multisegmental limb dynamics (the controlled object, or CO), starting from "knowing" only the dimensionality of the object's state space. It is based on an optimal control problem setup different from that of reinforcement learning. The LS solves the optimal control problem online while practicing the manipulation of the CO. The system's functional architecture comprises several adaptive components, each of which incorporates a number of mapping functions approximated with artificial neural nets. Besides the internal model of the CO's dynamics and the adaptive controller that computes the control law, the LS includes a new type of internal model: the minimal cost (IMmc) of moving the controlled object between a pair of states. That internal model appears critical for the LS's capacity to develop an optimal movement trajectory. The IMmc interacts with the adaptive controller in a cooperative manner: the controller provides an initial approximation of an optimal control action, which is further optimized in real time based on the IMmc, and the IMmc in turn provides information for updating the controller. The LS's performance was tested on the task of center-out reaching to eight randomly selected targets with a 2-DOF limb model. The LS reached an optimal level of performance in a few tens of trials. It also quickly adapted to movement perturbations produced by two different types of external force field. The results suggest that the proposed design of a self-optimizing control system can serve as a basis for modeling motor learning that includes the formation and adaptive modification of the plan of a goal-directed movement.

  20. TU-G-BRB-03: Iterative Optimization of Normalized Transmission Maps for IMRT Using Arbitrary Beam Profiles.

    PubMed

    Choi, K; Suh, T; Xing, L

    2012-06-01

    Newly available flattening-filter-free (FFF) beams increase the dose rate at the central axis by a factor of 3-6. In reality, even a flattening-filtered beam is not perfectly flat; in addition, the beam profiles across different fields may not have the same amplitude. The existing inverse-planning formalism based on the total variation of the intensity (or fluence) map cannot account for these properties of beam profiles. The purpose of this work is to develop a novel dose-optimization scheme that incorporates the inherent beam profiles, so as to maximally exploit arbitrary beam profiles while preserving the convexity of the optimization problem. To increase the accuracy of the problem formalism, we decompose the fluence map as an elementwise multiplication of the inherent beam profile and a normalized transmission map (NTM). Instead of attempting to optimize the fluence maps directly, we optimize the NTMs and beam profiles separately. A least-squares problem constrained by the total variation of the NTMs is developed to derive optimal fluence maps that balance dose conformality and FFF-beam delivery efficiency. With the resulting NTMs, we then find beam profiles to renormalize the NTMs. The proposed method iteratively optimizes and renormalizes the NTMs in a closed loop. The advantage of the proposed method is demonstrated using a head-and-neck case with flat beam profiles and a prostate case with non-flat beam profiles. The obtained NTMs achieve a more conformal dose distribution while preserving piecewise constancy compared to the existing solution. The proposed formalism has two major advantages over conventional inverse-planning schemes: (1) it provides a unified framework for inverse planning with beams of arbitrary fluence profiles, including treatment with beams of mixed fluence profiles; and (2) the use of total-variation constraints on the NTMs allows us to optimally balance dose conformality and deliverability for a given beam configuration. This project was supported in part by grants from the National Science Foundation (0854492), National Cancer Institute (1R01 CA104205), and the Leading Foreign Research Institute Recruitment Program of the Korean Ministry of Education, Science and Technology (K20901000001-09E0100-00110). To the authors' best knowledge, there is no conflict of interest. © 2012 American Association of Physicists in Medicine.
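    A minimal 1-D sketch of the decomposition may help: the fluence is the elementwise product of a fixed beam profile and an NTM, and the NTM is fit by total-variation-penalized least squares with a projected (sub)gradient step. The dose operator, sizes, and step/penalty values are toy assumptions, not the paper's solver.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
A = rng.random((n, n)) / n                 # toy dose-influence matrix
profile = 1.0 + 0.5 * np.cos(np.linspace(0, np.pi, n))   # non-flat beam
target = A @ (profile * 0.7)               # toy prescribed dose

t = np.full(n, 0.5)                        # NTM, values kept in [0, 1]
lam, step = 1e-3, 0.5
for _ in range(500):
    fluence = profile * t                  # fluence = profile ⊙ NTM
    grad_fit = profile * (A.T @ (A @ fluence - target))
    # subgradient of TV(t) = sum |t_i - t_{i-1}|
    grad_tv = lam * (np.sign(np.diff(t, prepend=t[:1]))
                     - np.sign(np.diff(t, append=t[-1:])))
    t = np.clip(t - step * (grad_fit + grad_tv), 0.0, 1.0)

print(float(np.linalg.norm(A @ (profile * t) - target)))
```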

  1. Fixed Points of Contractive Mappings in b-Metric-Like Spaces

    PubMed Central

    Hussain, Nawab; Roshan, Jamal Rezaei

    2014-01-01

    We discuss topological structure of b-metric-like spaces and demonstrate a fundamental lemma for the convergence of sequences. As an application we prove certain fixed point results in the setup of such spaces for different types of contractive mappings. Finally, some periodic point results in b-metric-like spaces are obtained. Two examples are presented in order to verify the effectiveness and applicability of our main results. PMID:25143980

  2. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    NASA Astrophysics Data System (ADS)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal-window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages, represented by multiple quick response (QR) codes, are first partitioned into a series of subblocks. Each subblock is then marked with a specific polarization state and randomly distributed in 3D space, with both longitudinal and transversal degrees of freedom. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple QR codes are encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.

  3. Dynamic positioning configuration and its first-order optimization

    NASA Astrophysics Data System (ADS)

    Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu

    2014-02-01

    Traditional geodetic network optimization deals with static, discrete control points. The modern space geodetic network, by contrast, is composed of moving control points in space (satellites) and on the Earth (ground stations), so the network configuration formed by these facilities is essentially dynamic and continuous. Moreover, besides the position parameters to be estimated, other geophysical information or signals can be extracted from the continuous observations, and the dynamic (continuous) configuration of the space network determines whether a particular frequency of signal can be identified by the system. In this paper, we employ functional analysis and graph theory to study the dynamic configuration of space geodetic networks, focusing on the optimal estimation of position and clock-offset parameters. The principle of D-optimization is introduced in Hilbert space after the concept of the traditional discrete configuration is generalized from finite to infinite space. It is shown that D-optimization as developed for discrete configurations remains valid in dynamic configuration optimization, which is attributed to the natural generalization of least squares from Euclidean space to Hilbert space. We then introduce the principle of D-optimality invariance under combination and rotation operations, and propose several D-optimal simplex dynamic configurations: (1) the (semi)circular configuration in 2-dimensional space; and (2) the D-optimal cone configuration and the D-optimal helical configuration, the latter close to the GPS constellation, in 3-dimensional space. The initial design of the GPS constellation can be approximately treated as a combination of 24 D-optimal helices, with the ascending nodes of the satellites properly adjusted to realize a so-called Walker constellation. When the receiver clock-offset parameter is also estimated, the circular, symmetrical cone, and helical configurations remain D-optimal. It is shown that the given total observation time determines the optimal frequency (repeatability) of the moving known points and vice versa, and that one way to improve repeatability is to increase the rotational speed; under Newton's laws of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations: the first is the combination of D-optimal cone configurations with a so-called Walker constellation composed of D-optimal helical configurations, the second is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. An effective way to achieve high coverage is to employ a configuration composed of a number of moving known points instead of a simplex configuration (such as the D-optimal helical configuration), and the D-optimal simplex solutions or D-optimal complex configurations can be combined freely to achieve powerful configurations with flexible coverage and flexible repeatability. How to optimally generate and assess discrete configurations sampled from the continuous one is also discussed. The proposed configuration optimization framework takes into account the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and the regular polyhedra (tetrahedron, cube, octahedron, icosahedron, and dodecahedron) in three-dimensional space, showing that the conclusions drawn with the proposed technique are more general and no longer limited by particular sampling schemes. Using the conditional equation of the D-optimal nested helical configuration, relevant issues of GNSS constellation optimization are solved, and examples based on the GPS constellation verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS whose orbital inclination and altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.
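    The D-optimality criterion used throughout the abstract is straightforward to evaluate numerically for a sampled configuration: with receiver position plus clock offset as unknowns, each observation contributes a row (unit line-of-sight, 1) to the design matrix A, and D-optimality maximizes det(AᵀA). The helix parameters below are arbitrary illustrative values, not a real orbit.

```python
import numpy as np

def helix_positions(n, radius=26_560e3, pitch=2e6, turns=1.0):
    """Sample n points along a toy helix (metres)."""
    s = np.linspace(0.0, 2 * np.pi * turns, n)
    return np.stack([radius * np.cos(s), radius * np.sin(s), pitch * s],
                    axis=1)

def d_criterion(sat_positions, receiver=np.zeros(3)):
    """det(AᵀA) for position + clock-offset estimation."""
    los = sat_positions - receiver
    los /= np.linalg.norm(los, axis=1, keepdims=True)
    A = np.hstack([los, np.ones((len(los), 1))])   # geometry + clock column
    return np.linalg.det(A.T @ A)

print(d_criterion(helix_positions(24)))
```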

  4. Metabolite-cycled density-weighted concentric rings k-space trajectory (DW-CRT) enables high-resolution 1 H magnetic resonance spectroscopic imaging at 3-Tesla.

    PubMed

    Steel, Adam; Chiew, Mark; Jezzard, Peter; Voets, Natalie L; Plaha, Puneet; Thomas, Michael Albert; Stagg, Charlotte J; Emir, Uzay E

    2018-05-17

    Magnetic resonance spectroscopic imaging (MRSI) is a promising technique in both experimental and clinical settings. To date, however, MRSI has been hampered by prohibitively long acquisition times and by artifacts caused by subject motion and hardware-related frequency drift. In the present study, we demonstrate that density-weighted concentric ring trajectory (DW-CRT) k-space sampling, in combination with semi-LASER excitation and metabolite cycling, enables high-resolution MRSI data to be rapidly acquired at 3 Tesla. Single-slice full-intensity MRSI data (short-echo-time semi-LASER, TE = 32 ms) were acquired from 6 healthy volunteers with an in-plane resolution of 5 × 5 mm in 13 min 30 s using this approach. Using LCModel analysis, we found that the acquired spectra allowed the mapping of total N-acetylaspartate (median Cramer-Rao lower bound [CRLB] = 3%), glutamate+glutamine (8%), and glutathione (13%). In addition, we demonstrate the potential clinical utility of this technique by optimizing the TE to detect 2-hydroxyglutarate (long-echo-time semi-LASER, TE = 110 ms), producing relevant high-resolution metabolite maps of a grade III IDH-mutant oligodendroglioma in a single patient. This study demonstrates the potential utility of MRSI in the clinical setting at 3 Tesla.

  5. Comparative studies of the interaction between the Sun and planetary near space environments with the Solar Connections Observatory for Planetary Environments (SCOPE)

    NASA Astrophysics Data System (ADS)

    Harris, W. M.; Scope Team

    2003-04-01

    The Solar Connections Observatory for Planetary Environments (SCOPE) is a remote-sensing facility designed to probe the relationship of planetary bodies and the local interstellar medium to the solar wind and the UV-EUV radiation field. In particular, the SCOPE program seeks to comparatively monitor the near-space environments and thermospheres/ionospheres of planets, planetesimals, and satellites under different magnetospheric configurations and as a function of heliocentric distance and solar activity. In addition, SCOPE will include the Earth as a science target, providing new remote observations of auroral and upper-atmospheric phenomena and using it as a baseline for direct comparison with other planetary bodies. The observatory will be scheduled in discrete campaigns interleaving target and terrestrial observations, providing a comparative annual activity map over the course of half a solar cycle. The SCOPE science instrument consists of binocular UV (115-310 nm) and EUV (50-120 nm) telescopes and a side-channel sky-mapping interferometer on a spacecraft stationed in a remote orbit. The telescope instruments provide a mix of capabilities including high-spatial-resolution narrow-band imaging, moderate-resolution broadband spectro-imaging, and high-resolution line spectroscopy. The side-channel instrument will be optimized for line-profile measurements of diagnostic terrestrial upper-atmospheric, cometary, interplanetary, and interstellar extended emissions.

  6. Karma or Immortality: Can Religion Influence Space-Time Mappings?

    PubMed

    Li, Heng; Cao, Yu

    2018-04-01

    People implicitly associate the "past" and "future" with "front" and "back" in their minds according to their cultural attitudes toward time. As the temporal focus hypothesis (TFH) proposes, future-oriented people tend to think about time according to the future-in-front mapping, whereas past-oriented people tend to think about time according to the past-in-front mapping (de la Fuente, Santiago, Román, Dumitrache, & Casasanto, 2014). Whereas previous studies have demonstrated that culture exerts an important influence on people's implicit spatializations of time, we focus specifically on religion, a prominent layer of culture, as a potential additional influence on space-time mappings. In Experiments 1 and 2, we observed a difference between the two religious groups, with Buddhists being more past-focused and more frequently conceptualizing the past as ahead of them and the future as behind them, and Taoists being more future-focused and exhibiting the opposite space-time mapping. In Experiment 3, we administered a religion prime, in which Buddhists were randomly assigned to visualize a picture of the Buddha of the Past (Buddha Dipamkara) or of the Future (Buddha Maitreya). Results showed that the pictorial icon of Dipamkara increased participants' tendency to conceptualize the past as in front of them, whereas the pictorial icon of Maitreya caused a dramatic increase in the rate of future-in-front responses. In Experiment 4, the causal effect of religion on implicit space-time mappings was replicated in atheists. Taken together, these findings provide converging evidence for the hypothesized causal role of religion, via temporal focus, in determining space-time mappings. Copyright © 2018 Cognitive Science Society, Inc.

  7. Determining Optimal Microwave Antigen Retrieval Conditions for Microtubule-Associated Protein 2 Immunohistochemistry in the Guinea Pig Brain

    DTIC Science & Technology

    2002-12-01

    sections of formalin-fixed guinea pig brains using different MAP-2 monoclonal antibodies. Brain sections were boiled in sodium citrate, citric acid...citric acid solution at pH 6.0 is the optimal microwave-assisted AR method for immunolabeling MAP-2 in formalin-fixed, paraffin-processed guinea pig brain...studies on archival guinea pig brain paraffin blocks, ultimately relaxing the use of additional animals to evaluate changes in MAP-2 expression between chemical warfare nerve agent-treated and control samples.

  8. The Role of the Photogeologic Mapping in the Morocco 2013 Mars Analog Field Simulation (Austrian Space Forum)

    NASA Astrophysics Data System (ADS)

    Losiak, Anna; Orgel, Csilla; Moser, Linda; MacArthur, Jane; Gołębiowska, Izabela; Wittek, Steffen; Boyd, Andrea; Achorner, Isabella; Rampey, Mike; Bartenstein, Thomas; Jones, Natalie; Luger, Ulrich; Sans, Alejandra; Hettrich, Sebastian

    2013-04-01

    The MARS2013 mission: The Austrian Space Forum, together with multiple scientific partners, will conduct a Mars analog field simulation. The project takes place between the 1st and 28th of February 2013 in the northern Sahara near Erfoud. During the simulation a field crew (consisting of suited analog astronauts and a support team) will conduct several experiments while being managed by the Mission Support Center (MSC) located in Innsbruck, Austria. The aim of the project is to advance preparation for future human Mars missions by testing: 1) the mission design with regard to operational and engineering challenges (e.g., how to work efficiently with an introduced time delay in communication between the field team and the MSC), 2) scientific instruments (e.g., rovers), and 3) human performance in conditions analogous to those that will be encountered on Mars. The role of geological mapping: The Remote Science Support (RSS) team is responsible for processing science data obtained in the field. The RSS team is also in charge of preparing a set of maps for planning mission activities, including the development of traverses [1, 2]; the use of these maps will increase the time-cost efficiency of the entire mission. The RSS team members have no prior knowledge of the area where the simulation takes place, and the analysis is based entirely on remote-sensing satellite data (Landsat, GoogleEarth) and a digital elevation model (ASTER GDEM) derived from orbital data. Map design: The set of maps (covering an area of 5 km x 5 km centered on the Mission Base Camp) was designed to simplify site selection for daily traverse planning. Additionally, the maps will help accommodate the field crew's need for increased autonomy in decision making, forced by the induced time delay between the MSC and "Mars". The maps should allow the field team to orient themselves and navigate in the explored areas, as well as make informed decisions about the best alternative traverses if the ones suggested by the flight planning team based on satellite data turn out to be impossible. The set of maps includes: a "geological map" prepared following the suggestions of [3]; a set of experiment "suitability maps", one for every experiment, assessing the suitability of the area for that experiment (e.g., if a rover cannot move on surfaces with an inclination larger than 5° and/or covered with rocks larger than 15 cm in diameter, then areas likely to have such conditions are marked as not suitable for this experiment); a "danger map" showing the locations of all potentially dangerous places, e.g., cliffs; and a "mobility map" with information important for estimating the astronauts' mobility. During the mission the maps will be updated on a daily basis, based on observations made in the field, so the quality of the maps (and of predictions based on them) will gradually improve. Acknowledgements: We thank all people involved in the MARS2013 mission, especially Dr. Gernot Grömer, the President of the Austrian Space Forum and MARS2013 program officer & expedition lead. References: [1] Sans Fuentes S.A. 2012. Human-Robotic Mars Science Operations: Target Selection Optimization via Traverse and Science Planning. (M.S. thesis). U. of Innsbruck. [2] Hettrich S. 2012. Human-Robotic Mars Science Operations: Itinerary Optimisation for Surface Activities (M.S. thesis). U. of Innsbruck. [3] Skinner J.A.Jr., Fortezzo C.M. 2011. Acta Astronautica. http://dx.doi.org/10.1016/j.actaastro.2011.11.011.

  9. Design space construction of multiple dose-strength tablets utilizing bayesian estimation based on one set of design-of-experiments.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-01-01

    Design spaces for multiple dose strengths of tablets were constructed using a Bayesian estimation method with one set of design of experiments (DoE) of only the highest dose-strength tablet. The lubricant blending process for theophylline tablets with dose strengths of 100, 50, and 25 mg is used as a model manufacturing process in order to construct design spaces. The DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) for theophylline 100-mg tablet. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) of the 100-mg tablet were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. Three experiments under an optimal condition and two experiments under other conditions were performed using 50- and 25-mg tablets, respectively. The response surfaces of the highest-strength tablet were corrected to those of the lower-strength tablets by Bayesian estimation using the manufacturing data of the lower-strength tablets. Experiments under three additional sets of conditions of lower-strength tablets showed that the corrected design space made it possible to predict the quality of lower-strength tablets more precisely than the design space of the highest-strength tablet. This approach is useful for constructing design spaces of tablets with multiple strengths.

  10. Mapping of High Value Crops Through AN Object-Based Svm Model Using LIDAR Data and Orthophoto in Agusan del Norte Philippines

    NASA Astrophysics Data System (ADS)

    Candare, Rudolph Joshua; Japitana, Michelle; Cubillas, James Earl; Ramirez, Cherry Bryan

    2016-06-01

    This research describes the methods involved in mapping different high-value crops in Agusan del Norte, Philippines using LiDAR. The project is part of the Phil-LiDAR 2 Program, which aims to conduct a nationwide resource assessment using LiDAR. Because of the high-resolution data involved, the methodology described here relies on object-based image analysis and on optimal features derived from the LiDAR data and orthophotos. Object-based classification was primarily done by developing rule-sets in eCognition, using several features from the LiDAR data and orthophotos. In general, classes of objects cannot be separated by simple thresholds on individual features, which makes it difficult to develop a rule-set, so the image objects were subjected to Support Vector Machine (SVM) learning. SVMs have gained popularity because of their ability to generalize well given a limited number of training samples; however, they also suffer from parameter-assignment issues that can significantly affect the classification results. More specifically, the regularization parameter C of the linear SVM has to be optimized through cross-validation to increase the overall accuracy. After the segmentation was performed in eCognition, the optimization procedure and the extraction of the equations of the hyperplanes were done in Matlab. The learned hyperplanes separating one class from another in the multi-dimensional feature space can be thought of as super-features, which were then used in developing the classifier rule-set in eCognition. In this study, we report an overall classification accuracy of greater than 90% in different areas.
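    The cross-validated search for C is simple to reproduce; the sketch below uses scikit-learn in place of the paper's Matlab workflow, with synthetic data standing in for the LiDAR/orthophoto object features, and then evaluates the learned hyperplane as a "super-feature".

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# synthetic stand-in for per-object features (heights, intensities, etc.)
X, y = make_classification(n_samples=400, n_features=12, random_state=0)

# cross-validated grid search over the regularization parameter C
search = GridSearchCV(
    LinearSVC(dual=False),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0, 100.0]},
    cv=5, scoring="accuracy",
)
search.fit(X, y)

# signed distance to the hyperplane, usable as a rule-set "super-feature"
w = search.best_estimator_.coef_[0]
b = search.best_estimator_.intercept_[0]
super_feature = X @ w + b
print(search.best_params_, round(search.best_score_, 3), super_feature[:3])
```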

  11. GEO Carbon and GHG Initiative Task 3: Optimizing in-situ measurements of essential carbon cycle variables across observational networks

    NASA Astrophysics Data System (ADS)

    Durden, D.; Muraoka, H.; Scholes, R. J.; Kim, D. G.; Loescher, H. W.; Bombelli, A.

    2017-12-01

    The development of an integrated global carbon cycle observation system to monitor changes in the carbon cycle, and ultimately the climate system, across the globe is of crucial importance in the 21st century. Such a system should comprise space- and ground-based observations, in concert with modelling and analysis, to produce more robust budgets of carbon and other greenhouse gases (GHGs). A global initiative, the GEO Carbon and GHG Initiative, is working within the framework of the Group on Earth Observations (GEO) to promote interoperability and provide integration across different parts of the system, particularly at domain interfaces, thus optimizing the efforts of existing networks and initiatives to reduce uncertainties in the budgets of carbon and other GHGs. This is a very ambitious undertaking; the initiative is therefore separated into tasks with actionable objectives. Task 3 focuses on the optimization of in-situ observational networks. Its main objective is to develop and implement a procedure for enhancing and refining the observation system for identified essential carbon cycle variables (ECVs) so that it meets user-defined specifications at minimum total cost. This work outlines the implementation plan, which includes a review of essential carbon cycle variables and observation technologies, mapping the performance of the ECVs, and analyzing gaps and opportunities in order to design an improved observing system. As a subsequent step, a gap analysis of in-situ observations will begin in the terrestrial domain, to address missing coordination and large spatial gaps, and will later extend to ocean and atmospheric observations.

  12. Remote science support during MARS2013: testing a map-based system of data processing and utilization for future long-duration planetary missions.

    PubMed

    Losiak, Anna; Gołębiowska, Izabela; Orgel, Csilla; Moser, Linda; MacArthur, Jane; Boyd, Andrea; Hettrich, Sebastian; Jones, Natalie; Groemer, Gernot

    2014-05-01

    MARS2013 was an integrated Mars analog field simulation in eastern Morocco performed by the Austrian Space Forum between February 1 and 28, 2013. The purpose of this paper is to discuss the system of data processing and utilization adopted by the Remote Science Support (RSS) team during this mission. The RSS team procedures were designed to optimize operational efficiency of the Flightplan, field crew, and RSS teams during a long-term analog mission with an introduced 10 min time delay in communication between "Mars" and Earth. The RSS workflow was centered on a single-file, easy-to-use, spatially referenced database that included all the basic information about the conditions at the site of study, as well as all previous and planned activities. This database was prepared in Google Earth software. The lessons learned from MARS2013 RSS team operations are as follows: (1) using a spatially referenced database is an efficient way of data processing and data utilization in a long-term analog mission with a large amount of data to be handled, (2) mission planning based on iterations can be efficiently supported by preparing suitability maps, (3) the process of designing cartographical products should start early in the planning stages of a mission and involve representatives of all teams, (4) all team members should be trained in usage of cartographical products, (5) technical problems (e.g., usage of a geological map while wearing a space suit) should be taken into account when planning a work flow for geological exploration, (6) a system that helps the astronauts to efficiently orient themselves in the field should be designed as part of future analog studies.

  13. Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore

    USGS Publications Warehouse

    Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.

    2013-01-01

    The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.

  14. The Research of Feature Extraction Method of Liver Pathological Image Based on Multispatial Mapping and Statistical Properties

    PubMed Central

    Liu, Huiling; Xia, Bingbing; Yi, Dehui

    2016-01-01

    We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with hematoxylin-eosin staining, the R- and B-channel images reflect the sensitive information of liver pathology better, while the entropy space and the Local Binary Pattern (LBP) space reflect the texture features of the image better. To obtain more comprehensive information, we map the liver pathological images to the entropy space, the LBP space, the R space, and the B space. Because traditional Higher-Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, we propose an average-correction HLAC feature: we compute the statistical properties and the average gray value of the pathological image, and update each pixel value to the absolute value of the difference between the current pixel's gray value and the average gray value, which is more sensitive to gray-value changes in pathological images. Lastly, the HLAC template is used to calculate the features of the updated image. The experimental results show that the improved multispatial-mapping features yield better classification performance for liver cancer. PMID:27022407
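    The average-correction step and a first-order local autocorrelation are compact enough to sketch directly; the four-displacement set below is a small stand-in for the full HLAC template family, and the random patch stands in for one channel of a stained-tissue image.

```python
import numpy as np

def average_correct(img):
    """Replace each pixel by |pixel - image mean| (the correction step)."""
    return np.abs(img - img.mean())

def hlac_order1(img):
    """Sums of products of the centre pixel with one displaced neighbour."""
    feats = []
    for dr, dc in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        a = img[1:-1, 1:-1]
        b = img[1 + dr:img.shape[0] - 1 + dr,
                1 + dc:img.shape[1] - 1 + dc]
        feats.append(float((a * b).sum()))
    return feats

rng = np.random.default_rng(2)
patch = rng.random((64, 64))            # stand-in for an R-channel patch
print(hlac_order1(average_correct(patch)))
```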

  15. A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan

    2009-01-01

    Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criteria for a good path is determined by overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques have proven to be entirely satisfactory - some are too slow or unpredictable, some produce highly non-optimal paths or do not find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.

  16. Using evolutionary computation to optimize an SVM used in detecting buried objects in FLIR imagery

    NASA Astrophysics Data System (ADS)

    Paino, Alex; Popescu, Mihail; Keller, James M.; Stone, Kevin

    2013-06-01

    In this paper we describe an approach for optimizing the parameters of a Support Vector Machine (SVM) as part of an algorithm used to detect buried objects in forward-looking infrared (FLIR) imagery captured by a camera installed on a moving vehicle. The overall algorithm consists of a spot-finding procedure (to look for potential targets) followed by the extraction of several features from the neighborhood of each spot. The features include local binary patterns (LBP) and histograms of oriented gradients (HOG), as these are good at capturing texture classes. Finally, we project and sum each hit into UTM space along with its confidence value (obtained from the SVM), producing a confidence map for ROC analysis. In this work, we use an Evolutionary Computation Algorithm (ECA) to optimize various parameters of the system, such as the combination of features used, the parameters of the Canny edge detector, the SVM kernel, and various HOG and LBP parameters. To validate our approach, we compare results obtained from an SVM with parameters selected by our ECA technique against those previously selected by hand through several iterations of "guess and check".

  17. Optimization of Dynamic Aperture of PEP-X Baseline Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Min-Huey; /SLAC; Cai, Yunhai

    2010-08-23

    SLAC is developing a long-range plan to transfer the evolving scientific programs at SSRL from the SPEAR3 light source to a much higher performing photon source. One possibility is a storage ring housed in the 2.2-km PEP-II tunnel. The design goal of the PEP-X storage ring is to approach an optimal light-source design, with horizontal emittance less than 100 pm and vertical emittance of 8 pm, to reach the diffraction limit for 1-Å x-rays. The low-emittance design requires a lattice with strong focusing, leading to high natural chromaticity and therefore to strong sextupoles; the latter cause a reduction of the dynamic aperture. The dynamic aperture requirement for horizontal injection at the injection point is about 10 mm. In order to achieve the desired dynamic aperture, the transverse non-linearity of PEP-X is studied. The program LEGO is used to simulate the particle motion, and the technique of frequency maps is used to analyze the nonlinear behavior. The effect of the non-linearity is minimized subject to the given constraints of limited space. The details and results of the dynamic aperture optimization are discussed in this paper.

  18. Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.

    PubMed

    Lin, Lanny; Goodrich, Michael A

    2014-12-01

    During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the subject based on UAV sensor information can vary across search areas because of environmental factors such as vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection, in the form of a task-difficulty map, and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions, and they search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search-and-rescue scenarios, and show that the new algorithms significantly outperform the existing ones, yielding efficient paths with payoffs near the optimum.

  19. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In backward compatible HDR image/video compression, a general approach is to reconstruct HDR from compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd order polynomial achieves better mapping accuracy than a single high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method because finding the optimal pivot point that splits the LDR range into two pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while reducing computation time by a factor of 60 compared to the traditional exhaustive search in 2-piecewise 2nd order polynomial inverse tone mapping with a continuity constraint.
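
    The look-up-table idea can be illustrated with prefix sums: every entry of the per-segment normal-equation matrix is a sum of per-bin moments, so cumulative tables make each candidate pivot's least-squares fit O(1). A minimal sketch under stated assumptions (256-bin LDR, one channel, enough sample spread in each segment; not the authors' exact formulation):

        import numpy as np

        def best_pivot(codes, hdr, bins=256):
            """Search all pivots in O(bins) after one O(n) pass, via prefix-sum LUTs.

            codes: integer LDR codewords in [0, bins); hdr: corresponding HDR values.
            A 2nd-order polynomial is fit independently on each side of the pivot.
            """
            x, y = codes.astype(float), hdr.astype(float)
            # Per-bin moments: x^0..x^4, y, x*y, x^2*y, y^2 (nine tables).
            terms = [x**0, x, x**2, x**3, x**4, y, x * y, x**2 * y, y**2]
            M = np.zeros((9, bins))
            for row, t in enumerate(terms):
                np.add.at(M[row], codes, t)
            C = np.cumsum(M, axis=1)                      # prefix sums over bins

            def seg_sse(lo, hi):                          # SSE of the fit on bins [lo, hi)
                s = C[:, hi - 1] - (C[:, lo - 1] if lo > 0 else 0.0)
                A = np.array([[s[0], s[1], s[2]],
                              [s[1], s[2], s[3]],
                              [s[2], s[3], s[4]]])        # normal equations A c = b
                b = s[5:8]
                c = np.linalg.solve(A, b)
                return s[8] - c @ b                       # residual sum of squares

            pivots = range(8, bins - 8)                   # margin keeps both fits stable
            sse = [seg_sse(0, p) + seg_sse(p, bins) for p in pivots]
            return pivots[int(np.argmin(sse))]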

  20. Optimal Path Planning Program for Autonomous Speed Sprayer in Orchard Using Order-Picking Algorithm

    NASA Astrophysics Data System (ADS)

    Park, T. S.; Park, S. J.; Hwang, K. Y.; Cho, S. I.

    This study was conducted to develop a software program which computes the optimal path for autonomous navigation in an orchard, especially for a speed sprayer. The feasibility of autonomous navigation in orchards has been shown by other studies, which minimized the distance error between the planned and executed paths. However, research on planning an optimal path for a speed sprayer in an orchard is scarce. In this study, a digital map and a database for the orchard were designed, containing GPS coordinate information (coordinates of trees and the boundary of the orchard) and entity information (heights and widths of trees, radius of the main stem of each tree, tree diseases). An order-picking algorithm, commonly used in warehouse management, was used to calculate the optimal path based on the digital map. The database for the digital map was created using Microsoft Access, and the graphical interface for the database was built with Microsoft Visual C++ 6.0. Based on the digital map, it was possible to search and display information about the boundary of an orchard, the locations of trees, and daily plans for spraying chemicals, and to plan optimal paths for different orchards under various circumstances (starting the speed sprayer at different locations, spraying chemicals on only selected trees).
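
    Order-picking routing is closely related to the traveling-salesman problem; the nearest-neighbor sketch below, over tree coordinates, is a generic stand-in (not the paper's algorithm) that illustrates the kind of path computation involved.

        import numpy as np

        def nearest_neighbor_path(trees, start):
            """Greedy visiting order over selected tree coordinates from `start`.

            trees: (n, 2) array of map coordinates; start: (2,) start position.
            """
            remaining = list(range(len(trees)))
            pos, order = np.asarray(start, float), []
            while remaining:
                d = np.linalg.norm(trees[remaining] - pos, axis=1)
                nxt = remaining.pop(int(np.argmin(d)))
                order.append(nxt)
                pos = trees[nxt]
            return order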

  1. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.

  2. Low-dose cone-beam CT via raw counts domain low-signal correction schemes: Performance assessment and task-based parameter optimization (Part II. Task-based parameter optimization).

    PubMed

    Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong

    2018-05-01

    Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. For each LSC method, two parameters, denoted by the set P, were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j index the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function describing the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast, high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and because imaging task III did not have strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
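
    Because d' has no closed form in the filter parameters, the optimization reduces to an argmax over an experimentally measured map. A minimal sketch (the array and grid names are hypothetical):

        import numpy as np

        # d_map[i, j, a, b]: measured detectability for imaging task i, spatial
        # location j, and parameter-grid indices (a, b), e.g. (p_l, p_h) for ATM.
        def optimal_parameters(d_map, p1_grid, p2_grid, task, location):
            """Pick the parameter pair maximizing measured d' for one task/location."""
            surface = d_map[task, location]
            a, b = np.unravel_index(np.argmax(surface), surface.shape)
            return p1_grid[a], p2_grid[b], surface[a, b]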

  3. A crystallographic investigation of GaN nanostructures by reciprocal space mapping in a grazing incidence geometry.

    PubMed

    Lee, Sanghwa; Sohn, Yuri; Kim, Chinkyo; Lee, Dong Ryeol; Lee, Hyun-Hwi

    2009-05-27

    Reciprocal space mapping with a two-dimensional (2D) area detector in a grazing incidence geometry was applied to determine crystallographic orientations of GaN nanostructures epitaxially grown on a sapphire substrate. By using both unprojected and projected reciprocal space mapping with a proper coordinate transformation, the crystallographic orientations of GaN nanostructures with respect to that of a substrate were unambiguously determined. In particular, the legs of multipods in the wurtzite phase were found to preferentially nucleate on the sides of tetrahedral cores in the zinc blende phase.

  4. Noise pollution mapping approach and accuracy on landscape scales.

    PubMed

    Iglesias Merchan, Carlos; Diaz-Balteiro, Luis

    2013-04-01

    Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome, which in turn optimizes the mapping time and cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of the noise mapping of road traffic noise at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations. Copyright © 2013 Elsevier B.V. All rights reserved.
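
    For comparing a coarse-grid map against a reference map, agreement statistics like the Kappa index reduce to a per-cell class comparison. A minimal sketch assuming scikit-learn and co-registered grids (the class binning is an assumption of this sketch):

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        def map_agreement(reference_map, test_map, levels):
            """Kappa agreement between two co-registered noise maps.

            Maps are binned into noise classes (e.g. 5 dB bands) before comparison.
            """
            ref = np.digitize(reference_map.ravel(), levels)
            tst = np.digitize(test_map.ravel(), levels)
            return cohen_kappa_score(ref, tst)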

  5. Microgravity

    NASA Image and Video Library

    1998-06-16

    Eddie Snell (standing), Post-Doctoral Fellow of the National Research Council (NRC), and Marc Pusey of Marshall Space Flight Center (MSFC) use a reciprocal space mapping diffractometer for macromolecular crystal quality studies. The diffractometer is used in mapping the structure of macromolecules such as proteins to determine their structure and thus understand how they function with other proteins in the body. This is one of several analytical tools used on proteins crystallized on Earth and in space experiments. Photo credit: NASA/Marshall Space Flight Center (MSFC)

  6. Energy content of stormtime ring current from phase space mapping simulations

    NASA Technical Reports Server (NTRS)

    Chen, Margaret W.; Schulz, Michael; Lyons, Larry R.

    1993-01-01

    We perform a phase space mapping study to estimate the enhancement in energy content that results from stormtime particle transport in the equatorial magnetosphere. Our pre-storm phase space distribution is based on a steady-state transport model. Using results from guiding-center simulations of ion transport during model storms having main phases of 3 hr, 6 hr, and 12 hr, we map phase space distributions of ring current protons from the pre-storm distribution in accordance with Liouville's theorem. We find that transport can account for the entire ten to twenty-fold increase in magnetospheric particle energy content typical of a major storm if a realistic stormtime enhancement of the phase space density f is imposed at the nightside tail plasma sheet (represented by an enhancement of f at the neutral line in our model).

  7. Incorporating deliverable monitor unit constraints into spot intensity optimization in intensity modulated proton therapy treatment planning

    PubMed Central

    Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong

    2014-01-01

    The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
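
    A much-simplified two-stage linear program in the spirit of DSIO (not the paper's model), assuming SciPy: stage one solves an unconstrained-MU plan, and stage two re-solves with each spot forced either to zero or above the deliverable minimum MU.

        import numpy as np
        from scipy.optimize import linprog

        def two_stage_lp(D, d_target, mu_min):
            """D: dose-influence matrix (voxels x spots); d_target: prescribed dose.

            Minimizes the maximum voxel-dose deviation subject to spot-weight bounds.
            """
            nvox, nspot = D.shape

            def solve(lb, ub):
                # Variables: spot weights w (nspot) and deviation bound t (1); min t
                # subject to -t <= (D w - d_target)_i <= t for every voxel i.
                c = np.r_[np.zeros(nspot), 1.0]
                A = np.block([[D, -np.ones((nvox, 1))],
                              [-D, -np.ones((nvox, 1))]])
                b = np.r_[d_target, -d_target]
                bounds = list(zip(lb, ub)) + [(0, None)]
                res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
                return res.x[:nspot]

            w = solve(np.zeros(nspot), [None] * nspot)       # stage 1: plain SIO
            lo = np.where(w < mu_min / 2, 0.0, mu_min)       # round small spots off...
            hi = np.where(w < mu_min / 2, 0.0, None)         # ...or keep them >= mu_min
            return solve(lo, hi)                             # stage 2: deliverable plan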

  8. Combinatorial materials synthesis and high-throughput screening: an integrated materials chip approach to mapping phase diagrams and discovery and optimization of functional materials.

    PubMed

    Xiang, X D

    Combinatorial materials synthesis methods and high-throughput evaluation techniques have been developed to accelerate the process of materials discovery and optimization and phase-diagram mapping. Analogous to integrated circuit chips, integrated materials chips containing thousands of discrete different compositions or continuous phase diagrams, often in the form of high-quality epitaxial thin films, can be fabricated and screened for interesting properties. The microspot x-ray method, various optical measurement techniques, and a novel evanescent microwave microscope have been used to characterize the structural, optical, magnetic, and electrical properties of samples on the materials chips. These techniques are routinely used to discover and optimize ferroelectric, dielectric, optical, magnetic, and superconducting materials and to map their phase diagrams.

  9. Mapping the Stacks: Sustainability and User Experience of Animated Maps in Library Discovery Interfaces

    ERIC Educational Resources Information Center

    McMillin, Bill; Gibson, Sally; MacDonald, Jean

    2016-01-01

    Animated maps of the library stacks were integrated into the catalog interface at Pratt Institute and into the EBSCO Discovery Service interface at Illinois State University. The mapping feature was developed for optimal automation of the update process to enable a range of library personnel to update maps and call-number ranges. The development…

  10. Optimal feedback strategies for pursuit-evasion and interception in a plane

    NASA Technical Reports Server (NTRS)

    Rajan, N.; Ardema, M. D.

    1983-01-01

    Variable-speed pursuit-evasion and interception for two aircraft moving in a horizontal plane are analyzed in terms of a coordinate frame fixed in the plane at termination. Each participant's optimal motion can be represented by extremal trajectory maps. These maps are used to discuss sub-optimal approximations that are independent of the other participant. A method of constructing sections of the barrier, dispersal, and control-level surfaces and thus determining feedback strategies is described. Some examples are shown for pursuit-evasion and the minimum-time interception of a straight-flying target.

  11. The GEDI Strategy for Improved Mapping of Forest Biomass and Structure

    NASA Astrophysics Data System (ADS)

    Dubayah, R.

    2017-12-01

    In 2014 the Committee on Earth Observation Satellites (CEOS) published a comprehensive report on approaches to meet future requirements for space-based observations of carbon. Entitled the CEOS Strategy for Carbon Observations from Space and endorsed by its member space agencies, the report outlines carbon information needs for climate and other policy, and how these needs may be met through existing and planned satellite missions. The CEOS Strategy makes recommendations for new, high-priority measurements. Among these is that space-based measurements using lidar should have priority to provide information on height, structure and biomass, complementing the existing and planned suite of SAR missions, such as the NASA NISAR and ESA BIOMASS missions. NASA's Global Ecosystem Dynamics Investigation (GEDI) directly meets this challenge. Scheduled for launch in late 2018 for deployment on the International Space Station, GEDI will provide more than 12 billion observations of canopy height, vertical structure and topography using a 10-beam lidar optimized for ecosystem measurements. Central to the success of GEDI is the development of calibration equations that relate observed forest structure to biomass at a variety of spatial scales. GEDI creates these calibrations by combining a large database of field plot measurements with coincident airborne lidar observations that are used to simulate GEDI lidar waveforms. GEDI uses these relatively sparse footprint estimates of structure and biomass to create lower resolution, but spatially continuous, grids of structure and biomass. GEDI is also developing radar/lidar fusion algorithms to produce higher-resolution, spatially continuous estimates of canopy height and biomass in collaboration with the German Aerospace Center (DLR). In this talk we present the current status of the GEDI calibration and validation program, and its approach for fusing its observations with the next generation of SAR sensors for improved mapping of forest structure from space. As stressed by the CEOS Strategy, the success of these efforts will critically depend on enhanced intra- and inter-mission calibration and validation activities, underpinned by an expanding network of in situ field observations, such as is being implemented by GEDI.

  12. Fixed Point Results of Locally Contractive Mappings in Ordered Quasi-Partial Metric Spaces

    PubMed Central

    Arshad, Muhammad; Ahmad, Jamshaid

    2013-01-01

    Fixed point results for a self-map satisfying locally contractive conditions on a closed ball in an ordered 0-complete quasi-partial metric space have been established. Instead of monotone mappings, the notion of dominated mappings is applied. We have used a weaker metric, weaker contractive conditions, and weaker restrictions to obtain unique fixed points. An example is given which shows how this result can be used when the corresponding results cannot be. Our results generalize, extend, and improve several well-known conventional results. PMID:24062629

  13. Power Control and Monitoring Requirements for Thermal Vacuum/Thermal Balance Testing of the MAP Observatory

    NASA Technical Reports Server (NTRS)

    Johnson, Chris; Hinkle, R. Kenneth (Technical Monitor)

    2002-01-01

    The specific heater control requirements for the thermal vacuum and thermal balance testing of the Microwave Anisotropy Probe (MAP) Observatory at the Goddard Space Flight Center (GSFC) in Greenbelt, Maryland are described. The testing was conducted in the 10m wide x 18.3m high Space Environment Simulator (SES) Thermal Vacuum Facility. The MAP thermal testing required accurate quantification of spacecraft and fixture power levels while minimizing heater electrical emissions. The special requirements of the MAP test necessitated construction of five (5) new heater racks.

  14. Harmonic maps of S(2) into a complex Grassmann manifold.

    PubMed

    Chern, S S; Wolfson, J

    1985-04-01

    Let G(k, n) be the Grassmann manifold of all C(k) in C(n), the complex spaces of dimensions k and n, respectively, or, what is the same, the manifold of all projective spaces P(k-1) in P(n-1), so that G(1, n) is the complex projective space P(n-1) itself. We study harmonic maps of the two-dimensional sphere S(2) into G(k, n). The case k = 1 has been the subject of investigation by several authors [see, for example, Din, A. M. & Zakrzewski, W. J. (1980) Nucl. Phys. B 174, 397-406; Eells, J. & Wood, J. C. (1983) Adv. Math. 49, 217-263; and Wolfson, J. G. Trans. Am. Math. Soc., in press]. The harmonic maps S(2) --> G(2, 4) have been studied by Ramanathan [Ramanathan, J. (1984) J. Differ. Geom. 19, 207-219]. We shall describe all harmonic maps S(2) --> G(2, n). The method is based on several geometrical constructions, which lead from a given harmonic map to new harmonic maps, in which the image projective spaces are related by "fundamental collineations." The key result is the degeneracy of some fundamental collineations, which is a global consequence, following from the fact that the domain manifold is S(2). The method extends to G(k, n).

  15. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and are implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line with a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
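
    For the forward (3D to 2D) half of such a mapping, a purely linear least-squares stand-in is the classic direct linear transform; the sketch below fits a 3x4 projection matrix from point correspondences (no distortion or multiplane terms, unlike the paper's higher-order model).

        import numpy as np

        def fit_projection(pts3d, pts2d):
            """Least-squares 3x4 projection matrix from 3D-2D correspondences."""
            n = len(pts3d)
            X = np.hstack([pts3d, np.ones((n, 1))])          # homogeneous 3D points
            rows = []
            for (u, v), Xh in zip(pts2d, X):
                rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
                rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
            _, _, Vt = np.linalg.svd(np.asarray(rows))
            return Vt[-1].reshape(3, 4)                      # P minimizing ||A p||

        def project(P, pts3d):
            Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
            uvw = Xh @ P.T
            return uvw[:, :2] / uvw[:, 2:3]                  # perspective divide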

  16. Three-dimensional thermal structure of the South Polar Vortex of Venus

    NASA Astrophysics Data System (ADS)

    Hueso, Ricardo; Garate-Lopez, Itziar; Garcia-Muñoz, Antonio; Sánchez-Lavega, Agustín

    2014-11-01

    We have analyzed thermal infrared images provided by the VIRTIS-M instrument aboard Venus Express (VEX) to obtain high resolution thermal maps of the Venus south polar region between 55 and 85 km altitude. The maps cover three different dynamical configurations of the polar vortex: its classical dipolar shape, a regular oval shape, and a transition shape between the different configurations of the vortex. We apply the atmospheric model described by García Muñoz et al. (2013) and a variant of the retrieval algorithm detailed in Grassi et al. (2008) to obtain maps of temperature over the Venus south polar region in the quoted altitude range. These maps are discussed in terms of cloud motions and the relative vorticity distribution obtained previously (Garate-Lopez et al. 2013). Temperature maps retrieved at 55-63 km show the same structures that are observed in the ~5 µm radiance images. This altitude range coincides with the expected cloud top altitude at polar latitudes, and magnitudes derived from the analysis of ~5 µm images are measured at this altitude range. We also study the imprint of the vortex on the thermal field above the cloud level, which extends up to 80 km. From the temperature maps, we also study the vertical stability of different atmospheric layers. The cold collar is clearly the most statically stable structure at polar latitudes, while the vortex and subpolar latitudes show lower stability values. Furthermore, the hot filaments present within the vortex at 55-63 km exhibit lower values of static stability than their immediate surroundings. References: Garate-Lopez et al. Nat. Geosci. 6, 254-257 (2013); García Muñoz et al. Planet. Space Sci. 81, 65-73 (2013); Grassi, D. et al. J. Geophys. Res. 113, 1-12 (2008). Acknowledgements: We thank ESA for supporting Venus Express, ASI, CNES and the other national space agencies supporting VIRTIS on VEX, and their principal investigators G. Piccioni and P. Drossart. This work was supported by projects AYA2012-36666 with FEDER support, PRICI-S2009/ESP-1496, Grupos Gobierno Vasco IT-765-13 and by UPV/EHU through program UFI11/55. IGL and AGM acknowledge ESA/RSSD for hospitality and access to 'The Grid' computing resources.

  17. Overview of microseismic monitoring of hydraulic fracturing for unconventional oil and gas plays

    NASA Astrophysics Data System (ADS)

    Shemeta, J. E.

    2011-12-01

    The exponential growth of unconventional resources for oil and gas production has been driven by the use of horizontal drilling and hydraulic fracturing. These drilling and completion methods increase the contact area with the low-permeability, low-porosity hydrocarbon-bearing formations and allow for economic production in what was previously considered uncommercial rock. These new resource plays have sparked an enormous interest in microseismic monitoring of hydraulic fracture treatments. As a hydraulic fracture is pumped, microseismic events are emitted in a volume of rock surrounding the stimulated fracture. The goal of the monitoring is to identify and locate the microseismic events to a high degree of precision and to map the position of the induced hydraulic fracture in time and space. The microseismic events are very small, typically having a moment-magnitude range of -4 to 0. The microseismic data are collected using a variety of seismic array designs and instrumentation, including borehole, shallow-borehole, near-surface, and surface arrays, with sensors ranging from three-component clamped 15 Hz borehole sondes to simple vertical 10 Hz geophones for surface monitoring. The collection and processing of these data are currently under rapid technical development. Each monitoring method has technical challenges, which include accurate velocity modeling, correct seismic phase identification, and signal-to-noise issues. The microseismic locations are used to guide hydrocarbon exploration and production companies in crucial reservoir development decisions such as the direction to drill the horizontal well bores and the appropriate inter-well spacing between horizontal wells to optimally drain the resource. The fracture mapping is also used to guide fracture and reservoir engineers in designing and calibrating the fluid volumes and types, injection rates, and pressures for the hydraulic fracture treatments. The microseismic data can be located and mapped in near real-time during an injection and used to assist operators in avoiding geohazards (such as a karst feature or fault) or fracture height growth into undesirable formations such as water-bearing zones (which could ruin the well). An important objective for hydraulic fracture mapping is to map the effective fracture geometry: the specific volume of rock that is contributing to hydrocarbon flow into the well. This, however, remains an elusive goal for current mapping technology.

  18. [Site selection of nature reserve based on the self-learning tabu search algorithm with space-ecology set covering problem: An example from Daiyun Mountain, Southeast China].

    PubMed

    Huang, Jia Hang; Liu, Jin Fu; Lin, Zhi Wei; Zheng, Shi Qun; He, Zhong Sheng; Zhang, Hui Guang; Li, Wen Zhou

    2017-01-01

    Designing nature reserves is an effective approach to protecting biodiversity. Traditional approaches to designing nature reserves could only identify the core area for protecting species, without specifying an appropriate land area for the reserve. Site selection approaches based on mathematical models can select part of the land from the planning area to compose the nature reserve and protect specific species or ecosystems; they are useful for alleviating the contradiction between ecological protection and development. Existing site selection methods do not consider the ecological differences between units and suffer from a computational-efficiency bottleneck in the optimization algorithm. In this study, we first constructed an ecological value assessment system appropriate for forest ecosystems, which was used to calculate the ecological value of Daiyun Mountain and to draw its distribution map. Then, the Ecological Set Covering Problem (ESCP) was established by integrating the ecological values, and the Space-ecology Set Covering Problem (SSCP) was generated by adding spatial compactness to the ESCP. Finally, the STS algorithm, which has good optimizing performance, was used to search for approximately optimal solutions under diverse protection targets, and an optimized layout of the built-up area of Daiyun Mountain was proposed. According to the experimental results, the difference in the spatial distribution of ecological values was obvious. The ecological value of the sites selected by ESCP was higher than that of the standard SCP. SSCP could aggregate sites with high ecological value based on ESCP, and the level of aggregation increased with the weight given to the perimeter. We suggest that the range of the existing reserve could be expanded by about 136 km² and that the site of Tsuga longibracteata, located in the northwest of the study area, should be included. Our research aims to provide an optimization scheme for the sustainable development of the Daiyun Mountain nature reserve and the optimal allocation of land resources, and a novel idea for designing nature reserves for forest ecosystems in China.
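
    For orientation, the plain set covering problem underneath ESCP/SSCP admits a classic greedy approximation; the sketch below covers protection targets with candidate land units (a generic SCP heuristic; the paper's self-learning tabu search and ecological-value weighting are not reproduced here).

        def greedy_set_cover(units, species_of, targets):
            """units: list of candidate land-unit ids; species_of[u]: set of
            species unit u protects; targets: set of species to be covered.
            """
            uncovered, chosen = set(targets), []
            while uncovered:
                # Pick the unit covering the most still-uncovered species.
                best = max(units, key=lambda u: len(species_of[u] & uncovered))
                gain = species_of[best] & uncovered
                if not gain:
                    raise ValueError("targets cannot be covered by available units")
                chosen.append(best)
                uncovered -= gain
            return chosen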

  19. Study on the mapping of dark matter clustering from real space to redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yi; Song, Yong-Seon, E-mail: yizheng@kasi.re.kr, E-mail: ysong@kasi.re.kr

    The mapping of dark matter clustering from real space to redshift space introduces an anisotropic property to the measured density power spectrum in redshift space, known as the redshift space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher order polynomials due to indefinite cross correlations between the density and velocity fields, and by the Finger-of-God (FoG) effect due to the randomness of the peculiar velocity field. Whilst the full higher order polynomials remain unknown, the other systematics can be controlled consistently within the same order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: 1) the 'one-point' FoG term, which is independent of the separation vector between two different points, and 2) the 'correlated' FoG term, which appears as indefinite polynomials and is expanded to the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that a Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space down to the smallest scales considered, k ∼ 0.2 Mpc⁻¹, given the resolution of future experiments.

  20. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
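
    A compact sketch of the modification, assuming scikit-learn and base classifiers that expose predict_proba; the base models and the logistic-regression combiner are illustrative choices, not the paper's neural networks.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict

        def modified_stacking(base_models, X, y):
            """Stacked generalization with the input pattern appended to base outputs."""
            meta = [cross_val_predict(m, X, y, cv=5, method="predict_proba")
                    for m in base_models]              # out-of-fold base outputs
            Z = np.hstack(meta + [X])                  # <-- append the input pattern
            combiner = LogisticRegression(max_iter=1000).fit(Z, y)
            fitted = [m.fit(X, y) for m in base_models]
            return fitted, combiner

        def stacked_predict(fitted, combiner, X_new):
            Z = np.hstack([m.predict_proba(X_new) for m in fitted] + [X_new])
            return combiner.predict(Z)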

  1. A Moving Mesh Finite Element Algorithm for Singular Problems in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Li, Ruo; Tang, Tao; Zhang, Pingwen

    2002-04-01

    A framework for adaptive meshes based on the Hamilton-Schoen-Yau theory was proposed by Dvinsky. In a recent work (2001, J. Comput. Phys.170, 562-588), we extended Dvinsky's method to provide an efficient moving mesh algorithm which compared favorably with the previously proposed schemes in terms of simplicity and reliability. In this work, we will further extend the moving mesh methods based on harmonic maps to deal with mesh adaptation in three space dimensions. In obtaining the variational mesh, we will solve an optimization problem with some appropriate constraints, which is in contrast to the traditional method of solving the Euler-Lagrange equation directly. The key idea of this approach is to update the interior and boundary grids simultaneously, rather than considering them separately. Application of the proposed moving mesh scheme is illustrated with some two- and three-dimensional problems with large solution gradients. The numerical experiments show that our methods can accurately resolve detail features of singular problems in 3D.

  2. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.
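
    A common closed-form baseline for this task (not the authors' angular-distance formulation) uses the fact that surface normals of a cylinder are orthogonal to its axis, so the axis is the direction least represented in the normals:

        import numpy as np

        def cylinder_axis_from_normals(normals):
            """Axis estimate: eigenvector of the normals' scatter matrix with the
            smallest eigenvalue (assumes per-point normals are available)."""
            N = np.asarray(normals, float)
            N /= np.linalg.norm(N, axis=1, keepdims=True)
            w, V = np.linalg.eigh(N.T @ N)    # symmetric 3x3, ascending eigenvalues
            return V[:, 0]                    # direction most orthogonal to normals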

  3. Characterizing air quality data from complex network perspective.

    PubMed

    Fan, Xinghua; Wang, Li; Xu, Huihui; Li, Shasha; Tian, Lixin

    2016-02-01

    Air quality depends mainly on changes in the emission of pollutants and their precursors. Understanding its characteristics is the key to predicting and controlling air quality. In this study, complex networks were built to analyze the topological characteristics of air quality data using the correlation coefficient method. Firstly, PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) indexes of eight monitoring sites in Beijing were selected as samples from January 2013 to December 2014. Secondly, the C-C method was applied to determine the structure of the phase space. Points in the reconstructed phase space were taken as nodes of the mapped network, and edges were created between nodes whose correlation exceeded a critical threshold. Three properties of the constructed networks, degree distribution, clustering coefficient, and modularity, were used to determine the optimal value of the critical threshold. Finally, by analyzing and comparing topological properties, we point out that similarities and differences in the constructed complex networks reveal influence factors and their different roles in the real air quality system.
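
    The construction can be sketched in a few lines: embed the series, correlate delay vectors, and threshold. The embedding dimension and delay below are fixed for illustration (in the paper they come from the C-C method):

        import numpy as np

        def build_network(series, dim=3, tau=1, threshold=0.95):
            """Adjacency matrix of the phase-space correlation network."""
            n = len(series) - (dim - 1) * tau
            pts = np.array([series[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])
            corr = np.corrcoef(pts)           # correlation between delay vectors
            adj = corr > threshold            # edge where correlation is high
            np.fill_diagonal(adj, False)      # no self-loops
            return adj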

  4. The image of public space on planned housing based on environmental and behavior cognition mapping (case study: Cemara Asri Estate)

    NASA Astrophysics Data System (ADS)

    Nirfalini Aulia, Dwira; Zahara, Aina

    2018-03-01

    Public spaces in planned housing are places of social interaction for every visitor. This research examines four public spaces that meet common criteria for public space: pedestrian sidewalks, a public park, a waterfront, and a place of worship. The perception of public space is worth investigating because housing development shapes a community and should be designed with proper architectural consideration. The purpose of this research is to characterize the image of public space in planned housing in Medan City, based on the mapping of environmental and behavioral cognition, and to identify how this image differs across four respondent groups. The research method is descriptive and qualitative, with a case study approach (most similar case). Data were analyzed using mental maps and questionnaires. The image of public space is then formed from the elements of public space, wayfinding, route choice, and movement. The image differs across groups: for housing residents and for students of architecture, design, and planning it is outstanding; for visitors to the housing's public spaces it is good; and for people who have never visited the public spaces it is inadequate.

  5. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are initially inverted for using the least squares method without a positivity constraint, and then damped to a physically reasonable range. This first-step MAP inversion brings the solution close to the 'true' one quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45° dip angle and oblique slip, together with corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake, and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. The details will be reported at the meeting.
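
    The first step can be sketched with an off-the-shelf annealer standing in for ASA: for each candidate fault geometry, the slip is solved linearly and the data misfit is returned as the objective. SciPy's dual_annealing and the build_greens helper are assumptions of this sketch, not the authors' implementation.

        import numpy as np
        from scipy.optimize import dual_annealing

        def make_objective(build_greens, data):
            """build_greens(geom) -> G: Green's function matrix for a candidate
            geometry (strike, dip, depth, ...); data: InSAR observations."""
            def neg_log_posterior(geom):
                G = build_greens(geom)                           # (n_data, n_patches)
                slip, *_ = np.linalg.lstsq(G, data, rcond=None)  # unconstrained LSQ
                r = data - G @ slip
                return 0.5 * r @ r                               # Gaussian misfit
            return neg_log_posterior

        # Step-one global search over the non-slip parameters:
        # geom_best = dual_annealing(make_objective(build_greens, d), bounds=bnds).x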

  6. Value-centric design architecture based on analysis of space system characteristics

    NASA Astrophysics Data System (ADS)

    Xu, Q.; Hollingsworth, P.; Smith, K.

    2018-03-01

    Emerging design concepts such as miniaturisation, modularity, and standardisation have contributed to the rapid development of small and inexpensive platforms, particularly cubesats. This is stimulating a revolution in space design and development, leading satellites into the era of "smaller, faster, and cheaper". However, the current requirement-centric design philosophy, focused on bespoke monolithic systems, along with the associated development and production processes, does not inherently fit the innovative modular, standardised, and mass-produced technologies. This paper presents a new categorisation, characterisation, and value-centric design architecture to address this need for both traditional and novel system designs. Based on the categorisation of system configurations, a characterisation of space systems, comprised of duplication, fractionation, and derivation, is proposed to capture the overall system configuration characteristics and promote potential hybrid designs. Complying with the definitions of the system characterisation, mathematical mapping relations between the system characterisation and the system properties are described to establish the mathematical foundation of the proposed value-centric design methodology. To illustrate the methodology, subsystem reliability relationships are analysed to explore potential system configurations in the design space. The results of applying the system characteristic analysis clearly show that the effects of different configuration characteristics on the system properties can be effectively analysed and evaluated, enabling the optimization of system configurations.

  7. A unified theoretical framework for mapping models for the multi-state Hamiltonian.

    PubMed

    Liu, Jian

    2016-11-28

    We propose a new unified theoretical framework to construct equivalent representations of the multi-state Hamiltonian operator and present several approaches for the mapping onto the Cartesian phase space. After mapping an F-dimensional Hamiltonian onto an F+1 dimensional space, creation and annihilation operators are defined such that the F+1 dimensional space is complete for any combined excitation. Commutation and anti-commutation relations are then naturally derived, which show that the underlying degrees of freedom are neither bosons nor fermions. This sets the scene for developing equivalent expressions of the Hamiltonian operator in quantum mechanics and their classical/semiclassical counterparts. Six mapping models are presented as examples. The framework also offers a novel way to derive models such as the well-known Meyer-Miller model.
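
    For reference, the Meyer-Miller mapping mentioned above replaces F discrete electronic states by F pairs of Cartesian variables (x_n, p_n); in its standard textbook form (not quoted from the paper),

        H_{\mathrm{MM}}(\mathbf{x},\mathbf{p})
            = \sum_{n,m=1}^{F} H_{nm}\,\tfrac{1}{2}\left(x_n x_m + p_n p_m - \gamma\,\delta_{nm}\right),

    where γ is the zero-point-energy parameter (γ = 1 in the exact Stock-Thoss boson mapping, with ħ = 1).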

  8. Three-dimensional full-field X-ray orientation microscopy

    PubMed Central

    Viganò, Nicola; Tanguy, Alexandre; Hallais, Simon; Dimanov, Alexandre; Bornert, Michel; Batenburg, Kees Joost; Ludwig, Wolfgang

    2016-01-01

    A previously introduced mathematical framework for full-field X-ray orientation microscopy is for the first time applied to experimental near-field diffraction data acquired from a polycrystalline sample. Grain by grain tomographic reconstructions using convex optimization and prior knowledge are carried out in a six-dimensional representation of position-orientation space, used for modelling the inverse problem of X-ray orientation imaging. From the 6D reconstruction output we derive 3D orientation maps, which are then assembled into a common sample volume. The obtained 3D orientation map is compared to an EBSD surface map and local misorientations, as well as remaining discrepancies in grain boundary positions are quantified. The new approach replaces the single orientation reconstruction scheme behind X-ray diffraction contrast tomography and extends the applicability of this diffraction imaging technique to material micro-structures exhibiting sub-grains and/or intra-granular orientation spreads of up to a few degrees. As demonstrated on textured sub-regions of the sample, the new framework can be extended to operate on experimental raw data, thereby bypassing the concept of orientation indexation based on diffraction spot peak positions. This new method enables fast, three-dimensional characterization with isotropic spatial resolution, suitable for time-lapse observations of grain microstructures evolving as a function of applied strain or temperature. PMID:26868303

  9. Determination of MLC model parameters for Monaco using commercial diode arrays.

    PubMed

    Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian

    2016-07-08

    Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters were delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose due to an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3% and 2 mm gamma criteria. Given the close agreement of the optimized model to both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates which ranged from 70.0%-97.9% for gamma criteria of 3% and 2 mm. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the dose in-field for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process. © 2016 The Authors

  10. A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.

    PubMed

    Kim, Joo H; Roberts, Dustyn

    2015-09-01

    Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Protein Crystal Quality Studies

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Eddie Snell (standing), Post-Doctoral Fellow of the National Research Council (NRC), and Marc Pusey of Marshall Space Flight Center (MSFC) use a reciprocal space mapping diffractometer for macromolecular crystal quality studies. The diffractometer is used in mapping the structure of macromolecules such as proteins to determine their structure and thus understand how they function with other proteins in the body. This is one of several analytical tools used on proteins crystallized on Earth and in space experiments. Photo credit: NASA/Marshall Space Flight Center (MSFC)

  12. APPLICATION OF A BIP CONSTRAINED OPTIMIZATION MODEL COMBINED WITH NASA's ATLAS MODEL TO OPTIMIZE THE SOCIETAL BENEFITS OF THE USA's INTERNATIONAL SPACE EXPLORATION AND UTILIZATION INITIATIVE OF 1/14/04

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.; Glover, Fred W.; Woodcock, Gordon R.; Laguna, Manuel

    2005-01-01

    The 1/14/04 USA Space Exploration/Utilization Initiative invites all Space-faring Nations, all Space User Groups in Science, Space Entrepreneuring, Advocates of Robotic and Human Space Exploration, Space Tourism and Colonization Promoters, etc., to join an International Space Partnership. With more Space-faring Nations and Space User Groups each year, such a Partnership would require Multi-year (35 yr.-45 yr.) Space Mission Planning. With each Nation and Space User Group demanding priority for its missions, one needs a methodology for objectively selecting the best mission sequences to be added annually to this 45 yr. Moving Space Mission Plan. How can this be done? Planners have suggested building a Reusable, Sustainable, Space Transportation Infrastructure (RSSTI) to increase Mission synergism, reduce cost, and increase scientific and societal returns from this Space Initiative. Morgenthaler and Woodcock presented a Paper at the 55th IAC, Vancouver B.C., Canada, entitled Constrained Optimization Models For Optimizing Multi-Year Space Programs. This Paper showed that a Binary Integer Programming (BIP) Constrained Optimization Model combined with the NASA ATLAS Cost and Space System Operational Parameter Estimating Model has the theoretical capability to solve such problems. IAA Commission III, Space Technology and Space System Development, in its ACADEMY DAY meeting at Vancouver, requested that the Authors and NASA experts find several Space Exploration Architectures (SEAs), apply the combined BIP/ATLAS Models, and report the results at the 56th Fukuoka IAC. While the mathematical Model is given in Ref. [2], this Paper presents the application of that effort.

  13. Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.

    In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly, and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.

  15. Correlation between the Hurst exponent and the maximal Lyapunov exponent: Examining some low-dimensional conservative maps

    NASA Astrophysics Data System (ADS)

    Tarnopolski, Mariusz

    2018-01-01

    The Chirikov standard map and the 2D Froeschlé map are investigated. A few thousand values of the Hurst exponent (HE) and the maximal Lyapunov exponent (mLE) are plotted in a mixed space of the nonlinear parameter versus the initial condition. Both characteristic exponents reveal remarkably similar structures in this space. A tight correlation between the HEs and mLEs is found, with Spearman rank ρ = 0.83 and ρ = 0.75 for the Chirikov and 2D Froeschlé maps, respectively. Based on this relation, a machine learning (ML) procedure, using the nearest neighbor algorithm, is performed to reproduce the HE distribution from the mLE distribution alone. A few thousand HE and mLE values from the mixed spaces were used for training, and the HEs were then retrieved for 2-2.4 × 10^5 mLEs. The ML procedure reproduced the structure of the mixed spaces in great detail.
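
    A minimal sketch of the ML step as described: a nearest-neighbor regressor trained on a few thousand (mLE, HE) pairs, then used to retrieve HEs for ~2 × 10^5 mLE values. The synthetic tight monotone relation below is an assumption standing in for the paper's measured correlation.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(1)
    # Stand-in training data: a few thousand (mLE, HE) pairs with a tight
    # monotone relation plus noise, mimicking the reported correlation.
    mle_train = rng.uniform(0.0, 0.5, size=4000)
    he_train = 0.5 + 0.8 * mle_train + rng.normal(0, 0.02, size=4000)

    knn = KNeighborsRegressor(n_neighbors=1)  # nearest-neighbor rule
    knn.fit(mle_train.reshape(-1, 1), he_train)

    mle_grid = rng.uniform(0.0, 0.5, size=200_000)  # ~2e5 mLEs, as reported
    he_pred = knn.predict(mle_grid.reshape(-1, 1))
    print(he_pred[:5])
    ```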

  16. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James L.

    2010-01-01

    NASA's space data-communications infrastructure, the Space Network and the Ground Network, provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.
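
    A toy illustration (not NASA's disclosed algorithm) of evolutionary search applied to scheduling: candidate schedules assign communication events to time slots and antennas, the fitness penalizes double-booked antennas and violated RFI constraints, and selection plus mutation evolves the population. The event counts, slot counts, and RFI pair list below are all invented.

    ```python
    import random
    random.seed(42)

    N_EVENTS, N_SLOTS, N_ANTENNAS = 30, 12, 3
    # Hypothetical RFI constraint: these event pairs may not share a time slot.
    RFI_PAIRS = {(2, 7), (5, 11), (14, 20)}

    def fitness(sched):  # sched[i] = (slot, antenna) for event i
        penalty, seen = 0, {}
        for s, a in sched:
            penalty += seen.get((s, a), 0)  # antenna double-booked in a slot
            seen[(s, a)] = seen.get((s, a), 0) + 1
        for i, j in RFI_PAIRS:
            if sched[i][0] == sched[j][0]:
                penalty += 5                # RFI violations weighted heavily
        return -penalty                     # higher is better

    def random_sched():
        return [(random.randrange(N_SLOTS), random.randrange(N_ANTENNAS))
                for _ in range(N_EVENTS)]

    def mutate(sched):
        child = list(sched)
        i = random.randrange(N_EVENTS)
        child[i] = (random.randrange(N_SLOTS), random.randrange(N_ANTENNAS))
        return child

    pop = [random_sched() for _ in range(50)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)              # selection
        pop = pop[:25] + [mutate(random.choice(pop[:25]))  # elitism + mutation
                          for _ in range(25)]

    print("best penalty:", -fitness(pop[0]))
    ```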

  17. Human Mind Maps

    ERIC Educational Resources Information Center

    Glass, Tom

    2016-01-01

    When students generate mind maps, or concept maps, the maps are usually on paper, computer screens, or a blackboard. Human Mind Maps require few resources and little preparation. The main requirements are space where students can move around and a little creativity and imagination. Mind maps can be used for a variety of purposes, and Human Mind…

  18. Optimization with artificial neural network systems - A mapping principle and a comparison to gradient based methods

    NASA Technical Reports Server (NTRS)

    Leong, Harrison Monfook

    1988-01-01

    General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient-based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.
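
    The mapping idea can be illustrated on a toy problem: a quadratic objective (assumed here purely for illustration) is mapped to the ordinary differential equation dx/dt = -∇f(x), whose forward-Euler integration is run until the state settles near the target state. This is a generic gradient-flow sketch, not the paper's neural-network formulation.

    ```python
    import numpy as np

    # Quadratic test problem f(x) = 0.5 x^T A x - b^T x (invented instance).
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, 2.0])
    grad = lambda x: A @ x - b
    x_star = np.linalg.solve(A, b)  # known minimizer, used as the target state

    def settle(x, step, tol=1e-6, max_iter=100_000):
        """Euler-integrate the ODE dx/dt = -grad f(x); return the number of
        steps until the state is within tol of the target state."""
        for k in range(max_iter):
            if np.linalg.norm(x - x_star) < tol:
                return k
            x = x - step * grad(x)
        return max_iter

    print("settling steps (dt=0.1):", settle(np.zeros(2), 0.1))
    print("settling steps (dt=0.5):", settle(np.zeros(2), 0.5))
    ```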

  19. Aerodynamic shape optimization of wing and wing-body configurations using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.

  20. KSC-2013-3240

    NASA Image and Video Library

    2013-08-09

    CAPE CANAVERAL, Fla. – As seen on Google Maps, space shuttle Endeavour goes through transition and retirement processing in high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center. The spacecraft completed 25 missions beginning with its first flight, STS-49, in May 1992, and ending with STS-134 in May 2011. It helped construct the International Space Station and travelled more than 122 million miles in orbit during its career. The reaction control system pods in the shuttle's nose and aft section were removed for processing before Endeavour was put on public display at the California Science Center in Los Angeles. Google precisely mapped the space center and some of its historical facilities for the company's map page. The work allows Internet users to see inside buildings at Kennedy as they were used during the space shuttle era. Photo credit: Google/Wendy Wang

  1. EnviroAtlas - Austin, TX - Estimated Percent Green Space Along Walkable Roads

    EPA Pesticide Factsheets

    This EnviroAtlas dataset estimates green space along walkable roads. Green space within 25 meters of the road centerline is included and the percentage is based on the total area between street intersections. Green space provides valuable benefits to neighborhood residents and walkers by providing shade, improved aesthetics, and outdoor gathering spaces. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  2. Mapping the space radiation environment in LEO orbit by the SATRAM Timepix payload on board the Proba-V satellite

    NASA Astrophysics Data System (ADS)

    Granja, Carlos; Polansky, Stepan

    2016-07-01

    Detailed spatial- and time-correlated maps of the space radiation environment in Low Earth Orbit (LEO) are produced by the spacecraft payload SATRAM operating in open space on board the Proba-V satellite from the European Space Agency (ESA). Equipped with the hybrid semiconductor pixel detector Timepix, the compact radiation monitor payload provides the composition and spectral characterization of the mixed radiation field with quantum-counting and imaging dosimetry sensitivity, energetic charged particle tracking, directionality and energy loss response in wide dynamic range in terms of particle types, dose rates and particle fluxes. With a polar orbit (sun synchronous, 98° inclination) at the altitude of 820 km the payload samples the space radiation field at LEO covering basically the whole planet. First results of long-period data evaluation, in the form of time- and spatially-correlated maps of total dose rate (all particles), are given.

  3. On the mapping associated with the complex representation of functions and processes.

    NASA Technical Reports Server (NTRS)

    Harger, R. O.

    1972-01-01

    The mapping between function spaces that is implied by the representation of a real 'bandpass' function by a complex 'low-pass' function is made explicit. The discussion is extended to the representation of stationary random processes, where the mapping is between spaces of random processes. This approach clarifies the nature of the complex representation, especially in the case of random processes, and, in addition, derives the properties of the complex representation.

  4. New Strategy for Exploration Technology Development: The Human Exploration and Development of Space (HEDS) Exploration/Commercialization Technology Initiative

    NASA Technical Reports Server (NTRS)

    Mankins, John C.

    2000-01-01

    In FY 2001, NASA will undertake a new research and technology program supporting the goals of human exploration: the Human Exploration and Development of Space (HEDS) Exploration/Commercialization Technology Initiative (HTCI). The HTCI represents a new strategic approach to exploration technology, in which an emphasis will be placed on identifying and developing technologies for systems and infrastructures that may be common among exploration and commercial development of space objectives. A family of preliminary strategic research and technology (R&T) road maps have been formulated that address "technology for human exploration and development of space" (THREADS). These road maps frame and bound the likely content of the HTCI. Notional technology themes for the initiative include: (1) space resources development, (2) space utilities and power, (3) habitation and bioastronautics, (4) space assembly, inspection and maintenance, (5) exploration and expeditions, and (6) space transportation. This paper will summarize the results of the THREADS road mapping process and describe the current status and content of the HTCI within that framework. The paper will highlight the space resources development theme within the Initiative and will summarize plans for the coming year.

  5. Janus configurations with SL(2, ℤ)-duality twists, strings on mapping tori and a tridiagonal determinant formula

    NASA Astrophysics Data System (ADS)

    Ganor, Ori J.; Moore, Nathan P.; Sun, Hao-Yu; Torres-Chicon, Nesty R.

    2014-07-01

    We develop an equivalence between two Hilbert spaces: (i) the space of states of U(1)^n Chern-Simons theory with a certain class of tridiagonal matrices of coupling constants (with corners) on T^2; and (ii) the space of ground states of strings on an associated mapping torus with T^2 fiber. The equivalence is deduced by studying the space of ground states of SL(2, ℤ)-twisted circle compactifications of U(1) gauge theory, connected with a Janus configuration, and further compactified on T^2. The equality of dimensions of the two Hilbert spaces (i) and (ii) is equivalent to a known identity on determinants of tridiagonal matrices with corners. The equivalence of operator algebras acting on the two Hilbert spaces follows from a relation between the Smith normal form of the Chern-Simons coupling constant matrix and the isometry group of the mapping torus, as well as the torsion part of its first homology group.

  6. A tutorial in displaying mass spectrometry-based proteomic data using heat maps.

    PubMed

    Key, Melissa

    2012-01-01

    Data visualization plays a critical role in interpreting experimental results of proteomic experiments. Heat maps are particularly useful for this task, as they allow us to find quantitative patterns across proteins and biological samples simultaneously. The quality of a heat map can be vastly improved by understanding the options available to display and organize the data in the heat map. This tutorial illustrates how to optimize heat maps for proteomics data by incorporating known characteristics of the data into the image. First, the concepts used to guide the creation of heat maps are demonstrated. Then, these concepts are applied to two types of analysis: visualizing spectral features across biological samples, and presenting the results of tests of statistical significance. For all examples we provide details of computer code in the open-source statistical programming language R, which can be used by biologists and clinicians with little statistical background. Heat maps are a useful tool for presenting quantitative proteomic data organized in a matrix format. Understanding and optimizing the parameters used to create the heat map can vastly improve both the appearance and the interpretation of heat map data.
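
    The tutorial's examples are in R; the sketch below shows two of the same choices it discusses (row standardization and row ordering) in Python with matplotlib, on an invented protein-by-sample matrix.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    data = rng.lognormal(size=(20, 6))  # stand-in protein x sample matrix

    # Row-standardize so patterns across samples, not absolute abundance,
    # dominate the color scale.
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)

    order = np.argsort(z.mean(axis=1))  # one simple row-ordering choice
    fig, ax = plt.subplots(figsize=(4, 6))
    im = ax.imshow(z[order], aspect="auto", cmap="RdBu_r", vmin=-2, vmax=2)
    ax.set_xlabel("sample")
    ax.set_ylabel("protein (ordered)")
    fig.colorbar(im, ax=ax, label="row z-score")
    fig.savefig("heatmap.png", dpi=150)
    ```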

  7. Measurement of Air Pollution from Satellites (MAPS) 1994 Correlative Atmospheric Carbon Monoxide Mixing Ratios (DB-1020)

    DOE Data Explorer

    Novelli, Paul [NOAA Climate Monitoring and Diagnostics Lab (CMDL), Boulder, Colorado; Masarie, Ken [Cooperative Institute for Research in Environmental Sciences (CIRES), University of Colorado, Boulder, Colorado

    1998-01-01

    This database offers select carbon monoxide (CO) mixing ratios from eleven field and aircraft measurement programs around the world. Carbon monoxide mixing ratios in the middle troposphere have been examined for short periods of time by using the Measurement of Air Pollution from Satellites (MAPS) instrument. MAPS measures CO from a space platform, using gas filter correlation radiometry. During the 1981 and 1984 MAPS flights, measurement validation was attempted by comparing space-based measurements of CO to those made in the middle troposphere from aircraft. Before the 1994 MAPS flights aboard the space shuttle Endeavour, a correlative measurement team was assembled to provide the National Aeronautics and Space Administration (NASA) with results of their CO field measurement programs during the April and October shuttle missions. To maximize the usefulness of these correlative data, team members agreed to participate in an intercomparison of CO measurements. The correlative data presented in this database provide an internally consistent, ground-based picture of CO in the lower atmosphere during Spring and Fall 1994. The data show the regional importance of two CO sources: fossil-fuel burning in urbanized areas and biomass burning in regions in the Southern Hemisphere.

  8. Use of electromagnetic-terrain conductivity and DC-resistivity profiling techniques for bedrock characterization at the 15th-of-May City extension, Cairo, Egypt

    NASA Astrophysics Data System (ADS)

    Aly, Said A.; Farag, Karam S. I.; Atya, Magdy A.; Badr, Mohamed A. M.

    2018-06-01

    A joint multi-spacing electromagnetic-terrain conductivity meter and DC-resistivity horizontal profiling survey was conducted at the anticipated eastern extensional area of the 15th-of-May City, southeastern Cairo, Egypt. The main objective of the survey was to highlight the applicability, efficiency, and reliability of utilizing such non-invasive surface techniques in a field like geologic mapping, and hence to image both the vertical and lateral electrical resistivity structures of the subsurface bedrock. Consequently, a total of 6 reliable multi-spacing electromagnetic-terrain conductivity meter profiles and 7 DC-resistivity horizontal profiles were carried out between August 2016 and February 2017. All data sets were transformed-inverted extensively and consistently in terms of two-dimensional (2D) electrical resistivity smoothed-earth models. They could be used effectively and inexpensively to interpret the area's bedrock geologic sequence using the encountered consecutive electrically resistive and conductive anomalies. Notably, the encountered subsurface electrical resistivity structures, below all surveying profiles, correlate well with the geological faults mapped in the field. They could even provide a useful understanding of the faulting style. Absolute resistivity values were not necessarily diagnostic, but their vertical and lateral variations could provide more diagnostic information about the layer lateral extensions and thicknesses, and hence suggested reliable geo-electric earth models. The study demonstrated that a detailed multi-spacing electromagnetic-terrain conductivity meter and DC-resistivity horizontal profiling survey can help design an optimal geotechnical investigative program, not only for the whole eastern extensional area of the 15th-of-May City, but also for other new urban communities within the Egyptian desert.

  9. CSP-TSM: Optimizing the performance of Riemannian tangent space mapping using common spatial pattern for MI-BCI.

    PubMed

    Kumar, Shiu; Mamun, Kabir; Sharma, Alok

    2017-12-01

    Classification of electroencephalography (EEG) signals for motor imagery-based brain-computer interface (MI-BCI) is an exigent task, and common spatial pattern (CSP) has been extensively explored for this purpose. In this work, we focused on developing a new framework for classification of EEG signals for MI-BCI. We propose a single band CSP framework for MI-BCI that utilizes the concept of tangent space mapping (TSM) in the manifold of covariance matrices. The proposed method is named CSP-TSM. Spatial filtering is performed on the bandpass filtered MI EEG signal. Riemannian tangent space is utilized for extracting features from the spatially filtered signal. The TSM features are then fused with the CSP variance based features and feature selection is performed using Lasso. Linear discriminant analysis (LDA) is then applied to the selected features and finally classification is done using a support vector machine (SVM) classifier. The proposed framework gives improved performance for MI EEG signal classification in comparison with several competing methods. Experiments conducted show that the proposed framework reduces the overall classification error rate for MI-BCI by 3.16%, 5.10% and 1.70% (for BCI Competition III dataset IVa, BCI Competition IV Dataset I and BCI Competition IV Dataset IIb, respectively) compared to the conventional CSP method under the same experimental settings. The proposed CSP-TSM method produces promising results when compared with several competing methods in this paper. In addition, the computational complexity is less compared to that of the TSM method. Our proposed CSP-TSM framework can potentially be used for developing improved MI-BCI systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
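
    A simplified sketch of the tangent space mapping step on which the framework builds: covariance matrices are projected to the tangent space at a reference point via the matrix logarithm, and the upper triangle becomes the feature vector. The random data, the arithmetic-mean reference, and the omission of the usual √2 weighting of off-diagonal terms are simplifications; the CSP filtering, Lasso, LDA, and SVM stages are not shown.

    ```python
    import numpy as np
    from scipy.linalg import sqrtm, logm

    def tangent_space_map(covs, C_ref):
        """Project SPD covariance matrices to the tangent space at C_ref:
        S_i = log(C_ref^{-1/2} C_i C_ref^{-1/2}); the upper triangle of S_i
        is taken as the feature vector (a standard Riemannian TSM form)."""
        P = np.linalg.inv(sqrtm(C_ref))
        iu = np.triu_indices(C_ref.shape[0])
        feats = []
        for C in covs:
            S = logm(P @ C @ P)
            feats.append(np.real(S[iu]))
        return np.array(feats)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 4, 100))  # 10 trials, 4 channels, 100 samples
    covs = np.array([x @ x.T / x.shape[1] for x in X])
    # Reference point: arithmetic mean of the covariances (the Riemannian
    # geometric mean is common in practice; this keeps the sketch short).
    C_ref = covs.mean(axis=0)
    feats = tangent_space_map(covs, C_ref)
    print(feats.shape)  # (10, 10): one feature vector per trial
    ```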

  10. The development and optimisation of 3D black-blood R2* mapping of the carotid artery wall.

    PubMed

    Yuan, Jianmin; Graves, Martin J; Patterson, Andrew J; Priest, Andrew N; Ruetten, Pascal P R; Usman, Ammara; Gillard, Jonathan H

    2017-12-01

    To develop and optimise a 3D black-blood R2* mapping sequence for imaging the carotid artery wall, using optimal blood suppression and k-space view ordering. Two different blood suppression preparation methods were used; Delay Alternating with Nutation for Tailored Excitation (DANTE) and improved Motion Sensitive Driven Equilibrium (iMSDE) were each combined with a three-dimensional (3D) multi-echo Fast Spoiled GRadient echo (ME-FSPGR) readout. Three different k-space view-order designs: Radial Fan-beam Encoding Ordering (RFEO), Distance-Determined Encoding Ordering (DDEO) and Centric Phase Encoding Order (CPEO) were investigated. The sequences were evaluated through Bloch simulation and in a cohort of twenty volunteers. The vessel wall Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR) and R2*, and the sternocleidomastoid muscle R2* were measured and compared. Different numbers of acquisitions-per-shot (APS) were evaluated to further optimise the effectiveness of blood suppression. In the sternocleidomastoid muscle of the volunteers, all sequences gave R2* measurements comparable to a conventional, i.e. non-blood-suppressed, sequence. Both Bloch simulations and volunteer data showed that DANTE has a higher signal intensity and results in a higher image SNR than iMSDE. Blood suppression efficiency was not significantly different when using different k-space view orders. Smaller APS achieved better blood suppression. The use of blood-suppression preparation methods does not affect the measurement of R2*. A DANTE-prepared ME-FSPGR sequence with a small number of acquisitions-per-shot can provide high quality black-blood R2* measurements of the carotid vessel wall. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by a heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
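
    A greatly simplified stand-in for the first heuristic: a greedy priority-list scheduler that releases ready modules in precedence order and assigns the costliest ready module to the earliest-free processor. Interprocessor communication costs and the weighted bipartite matching step are omitted, and the task graph below is invented.

    ```python
    import heapq

    def list_schedule(tasks, deps, cost, n_proc):
        """Greedy list scheduling: repeatedly assign the costliest ready
        task to the processor that becomes free earliest; deps is a list
        of (u, v) pairs meaning u must finish before v starts."""
        indeg = {t: 0 for t in tasks}
        for u, v in deps:
            indeg[v] += 1
        ready = [t for t in tasks if indeg[t] == 0]
        proc_free = [(0.0, p) for p in range(n_proc)]  # (free time, processor)
        heapq.heapify(proc_free)
        finish = {}
        while ready:
            ready.sort(key=lambda t: -cost[t])  # priority: largest cost first
            t = ready.pop(0)
            free_t, p = heapq.heappop(proc_free)
            start = max(free_t,
                        max((finish[u] for u, v in deps if v == t), default=0.0))
            finish[t] = start + cost[t]
            heapq.heappush(proc_free, (finish[t], p))
            for u, v in deps:               # release newly ready successors
                if u == t:
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        ready.append(v)
        return max(finish.values())

    tasks = ["a", "b", "c", "d"]
    deps = [("a", "c"), ("b", "c"), ("c", "d")]
    cost = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
    print("makespan:", list_schedule(tasks, deps, cost, n_proc=2))
    ```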

  12. Robust fuel- and time-optimal control of uncertain flexible space structures

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken

    1993-01-01

    The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.

  13. The canonical quantization of chaotic maps on the torus

    NASA Astrophysics Data System (ADS)

    Rubin, Ron Shai

    In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ħ → 0, the classical dynamics is returned. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ⁺, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from position to momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here, found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we return a finite-dimensional matrix propagator. We present this connection explicitly for several examples.

  14. Maximum entropy modeling of metabolic networks by constraining growth-rate moments predicts coexistence of phenotypes

    NASA Astrophysics Data System (ADS)

    De Martino, Daniele

    2017-12-01

    In this work maximum entropy distributions in the space of steady states of metabolic networks are considered upon constraining the first and second moments of the growth rate. Coexistence of fast and slow phenotypes, with bimodal flux distributions, emerges upon considering control on the average growth (optimization) and its fluctuations (heterogeneity). This is applied to the carbon catabolic core of Escherichia coli where it quantifies the metabolic activity of slow growing phenotypes and it provides a quantitative map with metabolic fluxes, opening the possibility to detect coexistence from flux data. A preliminary analysis on data for E. coli cultures in standard conditions shows degeneracy for the inferred parameters that extend in the coexistence region.
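
    A minimal sketch of the fitting mechanics: on a discretized growth-rate axis, the maximum entropy distribution subject to first- and second-moment constraints has the form p(λ) ∝ exp(aλ + bλ²), and the multipliers a, b can be solved for numerically. The axis range and moment targets below are invented; the paper's bimodal coexistence solutions arise from the full metabolic-network steady-state space, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    lam = np.linspace(0.0, 1.0, 400)     # discretized growth rates (assumed units)
    target_mean, target_var = 0.6, 0.04  # illustrative moment targets

    def moment_residuals(params):
        a, b = params
        w = np.exp(a * lam + b * lam**2)  # max-ent form: p ∝ exp(a λ + b λ²)
        p = w / w.sum()
        m1 = (p * lam).sum()
        m2 = (p * lam**2).sum()
        return m1 - target_mean, (m2 - m1**2) - target_var

    a, b = fsolve(moment_residuals, x0=(0.0, 0.0))
    w = np.exp(a * lam + b * lam**2)
    p = w / w.sum()
    mean = (p * lam).sum()
    var = (p * lam**2).sum() - mean**2
    print(f"a={a:.3f}, b={b:.3f}, mean={mean:.3f}, var={var:.4f}")
    ```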

  15. Microgravity

    NASA Image and Video Library

    1998-06-16

    Eddie Snell, Post-Doctoral Fellow at the National Research Council (NRC), uses a reciprocal space mapping diffractometer for macromolecular crystal quality studies. The diffractometer is used in mapping the structure of macromolecules such as proteins to determine their structure and thus understand how they function with other proteins in the body. This is one of several analytical tools used on proteins crystallized on Earth and in space experiments. Photo credit: NASA/Marshall Space Flight Center (MSFC)

  17. Experimental and Automated Analysis Techniques for High-resolution Electrical Mapping of Small Intestine Slow Wave Activity

    PubMed Central

    Angeli, Timothy R; O'Grady, Gregory; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; Du, Peng; Pullan, Andrew J; Bissett, Ian P

    2013-01-01

    Background/Aims Small intestine motility is governed by electrical slow wave activity, and abnormal slow wave events have been associated with intestinal dysmotility. High-resolution (HR) techniques are necessary to analyze slow wave propagation, but progress has been limited by few available electrode options and laborious manual analysis. This study presents novel methods for in vivo HR mapping of small intestine slow wave activity. Methods Recordings were obtained from along the porcine small intestine using flexible printed circuit board arrays (256 electrodes; 4 mm spacing). Filtering options were compared, and analysis was automated through adaptations of the falling-edge variable-threshold (FEVT) algorithm and graphical visualization tools. Results A Savitzky-Golay filter was chosen with polynomial order 9 and window size 1.7 seconds, which maintained 94% of slow wave amplitude, 57% of gradient and achieved a noise correction ratio of 0.083. Optimized FEVT parameters achieved 87% sensitivity and 90% positive-predictive value. Automated activation mapping and animation successfully revealed slow wave propagation patterns, and frequency, velocity, and amplitude were calculated and compared at 5 locations along the intestine (16.4 ± 0.3 cpm, 13.4 ± 1.7 mm/sec, and 43 ± 6 µV, respectively, in the proximal jejunum). Conclusions The methods developed and validated here will greatly assist small intestine HR mapping, and will enable experimental and translational work to evaluate small intestine motility in health and disease. PMID:23667749

  18. Range image registration based on hash map and moth-flame optimization

    NASA Astrophysics Data System (ADS)

    Zou, Li; Ge, Baozhen; Chen, Lei

    2018-03-01

    Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.
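
    A toy sketch of the hash map idea: fitness values are memoized keyed on (quantized) candidate transform parameters, so the evolutionary loop never re-scores a previously visited candidate. The quantization step, stand-in error function, and simple EA loop below are invented for illustration; the paper pairs the hash map with moth-flame optimization and a real registration error.

    ```python
    import random
    random.seed(7)

    cache = {}  # hash map: quantized transform parameters -> fitness

    def fitness(params):
        key = tuple(round(p, 3) for p in params)  # near-identical candidates
        if key in cache:                          # share one cache entry
            return cache[key]
        # Stand-in registration error (a real system would score point-to-
        # point distances between transformed and reference range images).
        err = sum((p - t) ** 2 for p, t in zip(params, (0.2, -0.1, 0.5)))
        cache[key] = err
        return err

    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
    for _ in range(100):
        pop.sort(key=fitness)                      # re-sorts hit the cache
        pop = pop[:10] + [[p + random.gauss(0, 0.05) for p in ind]
                          for ind in pop[:10]]

    print("best error:", fitness(pop[0]))
    print("unique evaluations:", len(cache), "of ~2000 fitness calls")
    ```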

  19. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James

    2014-01-01

    NASA's space data-communications infrastructure (the Space Network and the Ground Network) provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure (the relay satellites and the ground stations) can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.

  20. Fast Interrogation of Fiber Bragg Gratings with Electro-Optical Dual Optical Frequency Combs

    PubMed Central

    Posada-Roman, Julio E.; Garcia-Souto, Jose A.; Poiana, Dragos A.; Acedo, Pablo

    2016-01-01

    Optical frequency combs (OFC) generated by electro-optic modulation of continuous-wave lasers provide broadband coherent sources with high power per line and independent control of line spacing and the number of lines. In addition to their application in spectroscopy, they offer flexible and optimized sources for the interrogation of other sensors based on wavelength change or wavelength filtering, such as fiber Bragg grating (FBG) sensors. In this paper, a dual-OFC FBG interrogation system based on a single laser and two optical-phase modulators is presented. This architecture allows for the configuration of multimode optical source parameters such as the number of modes and their position within the reflected spectrum of the FBG. A direct read-out is obtained by mapping the optical spectrum onto the radio-frequency spectrum output of the dual-comb. This interrogation scheme is proposed for measuring fast phenomena such as vibrations and ultrasounds. Results are presented for dual-comb operation under optimized control. The optical modes are mapped onto detectable tones that are multiples of 0.5 MHz around a center radiofrequency tone (40 MHz). Measurements of ultrasounds (40 kHz and 120 kHz) are demonstrated with this sensing system. Ultrasounds induce dynamic strain onto the fiber, which generates changes in the reflected Bragg wavelength and, hence, modulates the amplitude of the OFC modes within the reflected spectrum. The amplitude modulation of two counterphase tones is detected to obtain a differential measurement proportional to the ultrasound signal. PMID:27898043

  2. EnviroAtlas - Cleveland, OH - Estimated Percent Green Space Along Walkable Roads

    EPA Pesticide Factsheets

    This EnviroAtlas dataset estimates green space along walkable roads. Green space within 25 meters of the road centerline is included and the percentage is based on the total area between street intersections. In this community, green space is defined as Trees & Forest, Grass & Herbaceous, Woody Wetlands, and Emergent Wetlands. In this metric, water is also included in green space. Green space provides valuable benefits to neighborhood residents and walkers by providing shade, improved aesthetics, and outdoor gathering spaces. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  3. EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Percent Green Space Along Walkable Roads

    EPA Pesticide Factsheets

    This EnviroAtlas dataset estimates green space along walkable roads. Green space within 25 meters of the road centerline is included and the percentage is based on the total area between street intersections. In this community, green space is defined as Trees and Forest, Grass and Herbaceous, Agriculture, Woody Wetlands, and Emergent Wetlands. In this metric, water is also included in green space. Green space provides valuable benefits to neighborhood residents and walkers by providing shade, improved aesthetics, and outdoor gathering spaces. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas/EnviroAtlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  4. Optimizing the Usability of Brain-Computer Interfaces.

    PubMed

    Zhang, Yin; Chase, Steve M

    2018-05-01

    Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.

  5. Particle swarm optimization based space debris surveillance network scheduling

    NASA Astrophysics Data System (ADS)

    Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao

    2017-02-01

    The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecraft, surveillance tasks for the existing facilities should be optimally scheduled so as to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results demonstrate the effectiveness of the proposed algorithm.
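
    A minimal particle swarm optimization loop of the standard form (inertia plus cognitive and social terms), with a placeholder objective; the paper's actual objective scores surveillance coverage and orbit-prediction improvement over candidate task assignments, which is not reproduced here.

    ```python
    import numpy as np
    rng = np.random.default_rng(3)

    def objective(x):
        # Stand-in for the scheduling criteria; a simple sphere function.
        return np.sum(x**2, axis=-1)

    n_particles, dim, iters = 30, 5, 200
    w, c1, c2 = 0.72, 1.49, 1.49  # standard PSO coefficients
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = objective(x)
        better = f < pbest_f                       # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]          # update global best

    print("best value:", pbest_f.min())
    ```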

  6. MIDACO on MINLP space applications

    NASA Astrophysics Data System (ADS)

    Schlueter, Martin; Erb, Sven O.; Gerdts, Matthias; Kemble, Stephen; Rückmann, Jan-J.

    2013-04-01

    A numerical study on two challenging mixed-integer non-linear programming (MINLP) space applications and their optimization with MIDACO, a recently developed general purpose optimization software, is presented. These applications are the optimal control of the ascent of a multiple-stage space launch vehicle and the space mission trajectory design from Earth to Jupiter using multiple gravity assists. Additionally, an NLP aerospace application, the optimal control of an F8 aircraft manoeuvre, is discussed and solved. In order to enhance the optimization performance of MIDACO, a hybridization technique, coupling MIDACO with an SQP algorithm, is presented for two of these three applications. The numerical results show that the applications can be solved to their best known solution (or even a new best solution) in a reasonable time by the considered approach. Since using the concept of MINLP is still a novelty in the field of (aero)space engineering, the demonstrated capabilities are seen as very promising.

  7. Mapping quantum-classical Liouville equation: projectors and trajectories.

    PubMed

    Kelly, Aaron; van Zon, Ramses; Schofield, Jeremy; Kapral, Raymond

    2012-02-28

    The evolution of a mixed quantum-classical system is expressed in the mapping formalism where discrete quantum states are mapped onto oscillator states, resulting in a phase space description of the quantum degrees of freedom. By defining projection operators onto the mapping states corresponding to the physical quantum states, it is shown that the mapping quantum-classical Liouville operator commutes with the projection operator so that the dynamics is confined to the physical space. It is also shown that a trajectory-based solution of this equation can be constructed that requires the simulation of an ensemble of entangled trajectories. An approximation to this evolution equation which retains only the Poisson bracket contribution to the evolution operator does admit a solution in an ensemble of independent trajectories but it is shown that this operator does not commute with the projection operators and the dynamics may take the system outside the physical space. The dynamical instabilities, utility, and domain of validity of this approximate dynamics are discussed. The effects are illustrated by simulations on several quantum systems.

  8. Augmented paper maps: Exploring the design space of a mixed reality system

    NASA Astrophysics Data System (ADS)

    Paelke, Volker; Sester, Monika

    Paper maps and mobile electronic devices have complementary strengths and shortcomings in outdoor use. In many scenarios, like small craft sailing or cross-country trekking, a complete replacement of maps is neither useful nor desirable. Paper maps are fail-safe, relatively cheap, offer superior resolution and provide large scale overview. In uses like open-water sailing it is therefore mandatory to carry adequate maps/charts. GPS based mobile devices, on the other hand, offer useful features like automatic positioning and plotting, real-time information update and dynamic adaptation to user requirements. While paper maps are now commonly used in combination with mobile GPS devices, there is no meaningful integration between the two, and the combined use leads to a number of interaction problems and potential safety issues. In this paper we explore the design space of augmented paper maps in which maps are augmented with additional functionality through a mobile device to achieve a meaningful integration between device and map that combines their respective strengths.

  9. Maps on statistical manifolds exactly reduced from the Perron-Frobenius equations for solvable chaotic maps

    NASA Astrophysics Data System (ADS)

    Goto, Shin-itiro; Umeno, Ken

    2018-03-01

    Maps on a parameter space for expressing distribution functions are exactly derived from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps defined on a subset of the real line, whose probability distribution function is the Cauchy distribution with some parameters. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and then it is shown that the derived maps can be characterized. Also, with a symplectic structure induced from a statistical structure, symplectic and information geometric aspects of the derived maps are discussed.

  10. The generalized Lyapunov theorem and its application to quantum channels

    NASA Astrophysics Data System (ADS)

    Burgarth, Daniel; Giovannetti, Vittorio

    2007-05-01

    We give a simple and physically intuitive necessary and sufficient condition for a map acting on a compact metric space to be mixing (i.e. infinitely many applications of the map transfer any input into a fixed convergence point). This is a generalization of the 'Lyapunov direct method'. First we prove this theorem in topological spaces and for arbitrary continuous maps. Finally we apply our theorem to maps which are relevant in open quantum systems and quantum information, namely quantum channels. In this context, we also discuss the relations between mixing and ergodicity (i.e. the property that there exists only a single input state which is left invariant by a single application of the map), showing that the two are equivalent when the invariant point of the ergodic map is pure.

  11. Linear time-to-space mapping system using double electrooptic beam deflectors.

    PubMed

    Hisatake, Shintaro; Tada, Keiji; Nagatsuma, Tadao

    2008-12-22

    We propose and demonstrate a linear time-to-space mapping system based on two successive electrooptic sinusoidal beam deflections. The directions of the two deflections are set to be mutually orthogonal, with a relative deflection phase of π/2 rad, so that a circular optical beam trajectory is achieved. The beam spot at the observation plane moves with a uniform velocity, and as a result linear time-to-space mapping (a uniform temporal resolution through the mapping) can be realized. A proof-of-concept experiment was carried out, and a temporal resolution of 5 ps has been demonstrated using traveling-wave type quasi-velocity-matched electrooptic beam deflectors. The developed system is expected to be applied to the characterization of ultrafast optical signals or to optical arbitrary waveform shaping for modulated microwave/millimeter-wave generation.
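
    A short check of the uniform-velocity claim (A and Ω below are our notation for the deflection amplitude and angular frequency, not the paper's symbols): two orthogonal sinusoidal deflections with a π/2 relative phase trace

    \[
    \mathbf{r}(t) = A\,(\sin\Omega t,\ \cos\Omega t), \qquad
    |\dot{\mathbf{r}}(t)| = A\Omega\sqrt{\cos^{2}\Omega t + \sin^{2}\Omega t} = A\Omega,
    \]

    so the spot speed is constant and equal time intervals map onto equal arc lengths of the circle, which is exactly the uniform temporal resolution the abstract describes.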

  12. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    PubMed

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
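
    A simplified sketch of the optimization step: given a dose influence matrix D (column j holding the dose each voxel receives per unit weight of spot j), spot weights are found by least squares with non-negativity. The random matrix and uniform target below are invented, and scipy's NNLS stands in for the paper's modified least-squares method on GPUs.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(5)
    n_voxels, n_spots = 200, 50
    D = rng.random((n_voxels, n_spots))  # stand-in MC dose influence matrix
    d_target = np.full(n_voxels, 2.0)    # uniform 2 Gy target (illustrative)

    # Non-negative least squares: minimize ||D w - d_target||^2 with w >= 0.
    w, residual = nnls(D, d_target)
    print("residual norm:", residual, "active spots:", int((w > 0).sum()))
    ```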

  13. Quantifying the effect of colorization enhancement on mammogram images

    NASA Astrophysics Data System (ADS)

    Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia

    2002-04-01

    Current methods of radiological displays provide only grayscale images of mammograms. The limitation of the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system, have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250 - 1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map, which follows the luminance map of the original grayscale images, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effect of this enhancement mechanism on the shape, frequency composition and statistical characteristics of the Visual Evoked Potential (VEP) are analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the Visual Evoked Potential (VEP).

  14. Dynamic tensile deformation and damage of B4C-reinforced Al composites: Time-resolved imaging with synchrotron x-rays

    DOE PAGES

    Bie, B. X.; Huang, J. Y.; Su, B.; ...

    2016-03-30

    Dynamic tensile experiments are conducted on 15% and 30% (weight percentage) B4C/Al composites with a split Hopkinson tension bar, along with high-speed synchrotron x-ray digital image correlation (XDIC) to map strain fields at μm and μs scales. As manifested by bulk-scale stress-strain curves, a higher particle content leads to a higher yield strength but lower ductility. Strain field mapping by XDIC demonstrates that tension deformation and tensile fracture, as opposed to shear and shear failure, dominate deformation and failure of the composites. The fractographs of recovered samples show consistent features. The particle-matrix interfaces are nucleation sites for strain localizations, and their propagation and coalescence are diffused by the Al matrix. The reduced spacing between strain localization sites with increasing particle content facilitates their coalescence and leads to decreased ductility. Furthermore, designing a particle-reinforced, metallic-matrix composite with balanced strength and ductility should consider optimizing the inter-particle distance as a key parameter.

  15. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation technology have become research hotspots. A virtual campus 3D model can not only express real-world objects naturally, realistically, and vividly, but can also extend the real campus in its time and space dimensions, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of on-campus buildings, special land parcels, etc. Dynamic interactive functionality is then realized by programming the object models from 3ds Max in VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on a variety of real-time processing optimization strategies in the scene design process. The paper preserves texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  16. Mapping carbon flux uncertainty and selecting optimal locations for future flux towers in the Great Plains

    USGS Publications Warehouse

    Gu, Yingxin; Howard, Daniel M.; Wylie, Bruce K.; Zhang, Li

    2012-01-01

    Flux tower networks (e.g., AmeriFlux, Agriflux) provide continuous observations of ecosystem exchanges of carbon (e.g., net ecosystem exchange), water vapor (e.g., evapotranspiration), and energy between terrestrial ecosystems and the atmosphere. The long-term time series of flux tower data are essential for studying and understanding terrestrial carbon cycles, ecosystem services, and climate change. Currently, there are 13 flux towers located within the Great Plains (GP). The towers are sparsely distributed and do not adequately represent the variety of vegetation cover types, climate conditions, and geophysical and biophysical conditions in the GP. This study assessed how well the available flux towers represent the environmental conditions or "ecological envelopes" across the GP and identified optimal locations for future flux towers in the GP. Regression-based remote sensing and weather-driven net ecosystem production (NEP) models derived from different extrapolation ranges (10 and 50%) were used to identify areas where ecological conditions were poorly represented by the flux tower sites and years previously used for mapping grassland fluxes. The optimal lands suitable for future flux towers within the GP were mapped. Results from this study provide information to optimize the usefulness of future flux towers in the GP and serve as a proxy for the uncertainty of the NEP map.
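
    One plausible reading of that envelope test, sketched in Python: a grid cell counts as poorly represented when any environmental variable falls outside the tower-site range widened by the extrapolation fraction (the data layout and names are illustrative):

        import numpy as np

        def poorly_represented_mask(env_grids, tower_values, extrapolation=0.10):
            """env_grids: list of 2D arrays of environmental variables;
            tower_values: matching lists of values at the tower sites.
            Returns True where no tower represents the local conditions,
            with each tower-site range widened by the extrapolation fraction."""
            mask = np.zeros(env_grids[0].shape, dtype=bool)
            for grid, values in zip(env_grids, tower_values):
                lo, hi = np.min(values), np.max(values)
                pad = extrapolation * (hi - lo)
                mask |= (grid < lo - pad) | (grid > hi + pad)
            return mask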

  17. Relationships between cerebral autoregulation and markers of kidney and liver injury in neonatal encephalopathy and therapeutic hypothermia.

    PubMed

    Lee, J K; Perin, J; Parkinson, C; O'Connor, M; Gilmore, M M; Reyes, M; Armstrong, J; Jennings, J M; Northington, F J; Chavez-Valdez, R

    2017-08-01

    We studied whether cerebral blood pressure autoregulation and kidney and liver injuries are associated in neonatal encephalopathy (NE). We monitored autoregulation of 75 newborns who received hypothermia for NE in the neonatal intensive care unit to identify the mean arterial blood pressure with optimized autoregulation (MAPOPT). Autoregulation parameters and creatinine, aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were analyzed using adjusted regression models. Greater time with blood pressure within MAPOPT during hypothermia was associated with lower creatinine in girls. Blood pressure below MAPOPT related to higher ALT and AST during normothermia in all neonates and boys. The opposite occurred in rewarming, when more time with blood pressure above MAPOPT related to higher AST. Blood pressures that optimize cerebral autoregulation may support the kidneys. Blood pressures below MAPOPT and liver injury during normothermia are associated. The relationship between MAPOPT and AST during rewarming requires further study.
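
    A minimal sketch of the burden metrics behind such analyses, assuming a regularly sampled MAP series and a known MAPOPT (names and sampling are illustrative):

        import numpy as np

        def pressure_burden(map_series, map_opt, dt_hours):
            """Return the fraction of time spent below MAPOPT and the
            cumulative deficit below it (mmHg x h) for a regularly
            sampled mean arterial pressure series."""
            below = map_series < map_opt
            deficit = np.sum((map_opt - map_series)[below]) * dt_hours
            return below.mean(), deficit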

  18. Optimized MLAA for quantitative non-TOF PET/MR of the brain

    NASA Astrophysics Data System (ADS)

    Benoit, Didier; Ladefoged, Claes N.; Rezaei, Ahmadreza; Keller, Sune H.; Andersen, Flemming L.; Højgaard, Liselotte; Hansen, Adam E.; Holm, Søren; Nuyts, Johan

    2016-12-01

    For quantitative tracer distribution in positron emission tomography, attenuation correction is essential. In a hybrid PET/CT system the CT images serve as a basis for generation of the attenuation map, but in PET/MR the MR images do not have a similarly simple relationship with the attenuation map, so attenuation correction in PET/MR systems is more challenging. Typically one of two MR sequences is used: the Dixon or the ultra-short echo time (UTE) technique. However, these sequences have some well-known limitations. In this study, a reconstruction technique based on a modified and optimized non-TOF MLAA is proposed for PET/MR brain imaging. The idea is to tune the parameters of the MLTR using information from an attenuation image computed from the UTE sequences and a T1w MR image. In this MLTR algorithm, an {αj} parameter is introduced and optimized in order to drive the algorithm to a final attenuation map most consistent with the emission data. Because the non-TOF MLAA is used, a technique to reduce the cross-talk effect is proposed. The proposed algorithm is compared to common reconstruction methods: OSEM using a CT attenuation map, considered as the reference, and OSEM using the Dixon and UTE attenuation maps. To show the robustness and reproducibility of the proposed algorithm, a set of 204 [18F]FDG patients, 35 [11C]PiB patients and 1 [18F]FET patient is used. The results show that by choosing an optimized value of {αj} in MLTR, the proposed algorithm improves on the standard MR-based attenuation correction methods (i.e. OSEM using the Dixon or UTE attenuation maps), and the cross-talk and scale problems are limited.
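
    To make the alternation concrete, a toy non-TOF MLAA sketch on a generic linear system: an MLEM step for the activity with the current attenuation factors, then a scaled gradient step for the attenuation map standing in for the paper's MLTR update (the scalar alpha only loosely plays the role of the optimized {αj}, and the MR-derived information that anchors the attenuation map is omitted):

        import numpy as np

        def toy_mlaa(y, A, L, n_iter=50, alpha=0.5):
            """y: measured counts per LOR; A: geometric system matrix;
            L: intersection lengths for attenuation. Alternates an MLEM
            activity update with a scaled-gradient attenuation update."""
            lam = np.ones(A.shape[1])                 # activity estimate
            mu = np.zeros(A.shape[1])                 # attenuation estimate
            for _ in range(n_iter):
                f = np.exp(-L @ mu)                   # attenuation factors
                yhat = f * (A @ lam) + 1e-12
                lam *= (A.T @ (f * y / yhat)) / np.maximum(A.T @ f, 1e-12)
                yhat = f * (A @ lam) + 1e-12
                grad = L.T @ (yhat - y)               # d(log-likelihood)/d(mu)
                mu = np.maximum(mu + alpha * grad / (L.T @ yhat + 1e-12), 0.0)
            return lam, mu

    Without time-of-flight information the likelihood barely distinguishes a global activity rescaling from a compensating attenuation offset, which is why the abstract emphasizes taming the cross-talk and scale problems with MR-derived information.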

  19. CRISM Multispectral and Hyperspectral Mapping Data - A Global Data Set for Hydrated Mineral Mapping

    NASA Astrophysics Data System (ADS)

    Seelos, F. P.; Hash, C. D.; Murchie, S. L.; Lim, H.

    2017-12-01

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a visible through short-wave infrared hyperspectral imaging spectrometer (VNIR S-detector: 364-1055 nm; IR L-detector: 1001-3936 nm; 6.55 nm sampling) that has been in operation on the Mars Reconnaissance Orbiter (MRO) since 2006. Over the course of the MRO mission, CRISM has acquired 290,000 individual mapping observation segments (mapping strips) with a variety of observing modes and data characteristics (VNIR/IR; 100/200 m/pxl; multi-/hyper-spectral band selection) over a wide range of observing conditions (atmospheric state, observation geometry, instrument state). CRISM mapping data coverage density varies primarily with latitude and secondarily due to seasonal and operational considerations. The aggregate global IR mapping data coverage currently stands at 85% (80% at the equator, with 40% repeat sampling), which is sufficient spatial sampling density to support the assembly of empirically optimized, radiometrically consistent mapping mosaic products. The CRISM project has defined a number of mapping mosaic data products (e.g. Multispectral Reduced Data Record (MRDR) map tiles) with varying degrees of observation-specific processing and correction applied prior to mosaic assembly. A commonality among the mosaic products is the presence of inter-observation radiometric discrepancies that are traceable to variable observation circumstances or associated atmospheric/photometric correction residuals. The empirical approach to radiometric reconciliation leverages inter-observation spatial overlaps and proximal relationships to construct a graph that encodes the mosaic structure and radiometric discrepancies. The graph theory abstraction allows the underlying structure of the mosaic to be evaluated and the corresponding optimization problem configured so it is well-posed. Linear and non-linear least squares optimization is then employed to derive a set of observation- and wavelength-specific model parameters for a series of transform functions that minimize the total radiometric discrepancy across the mosaic. This empirical approach to CRISM data radiometric reconciliation and the utility of the resulting mapping data mosaic products for hydrated mineral mapping will be presented.
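
    The structure of that optimization can be sketched as a graph least-squares problem: say one multiplicative gain per observation and per wavelength, residuals on the overlap edges, and a gauge constraint pinning the overall scale (the paper's transform functions are richer; this sketch is illustrative):

        import numpy as np

        def solve_gains(n_obs, overlaps):
            """overlaps: list of (a, b, mean_a, mean_b) tuples giving the mean
            radiance of observations a and b in their shared overlap. Solves
            for per-observation log-gains x with x_a - x_b = log(mean_b/mean_a),
            gauge-fixed by sum(x) = 0, and returns multiplicative gains."""
            rows, rhs = [], []
            for a, b, mean_a, mean_b in overlaps:
                r = np.zeros(n_obs)
                r[a], r[b] = 1.0, -1.0
                rows.append(r)
                rhs.append(np.log(mean_b / mean_a))
            rows.append(np.ones(n_obs))               # gauge constraint
            rhs.append(0.0)
            x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
            return np.exp(x)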

  20. Perioperative optimal blood pressure as determined by ultrasound tagged near infrared spectroscopy and its association with postoperative acute kidney injury in cardiac surgery patients.

    PubMed

    Hori, Daijiro; Hogue, Charles; Adachi, Hideo; Max, Laura; Price, Joel; Sciortino, Christopher; Zehr, Kenton; Conte, John; Cameron, Duke; Mandal, Kaushik

    2016-04-01

    Perioperative blood pressure management by targeting an individualized optimal blood pressure, determined by cerebral blood flow autoregulation monitoring, may ensure sufficient renal perfusion. The purpose of this study was to evaluate changes in the optimal blood pressure for individual patients, determined during cardiopulmonary bypass (CPB) and during the early postoperative period in the intensive care unit (ICU). A secondary aim was to examine whether excursions below optimal blood pressure in the ICU are associated with risk of cardiac surgery-associated acute kidney injury (CSA-AKI). One hundred and ten patients undergoing cardiac surgery had cerebral blood flow monitored with a novel technology using ultrasound tagged near infrared spectroscopy (UT-NIRS) during CPB and in the first 3 h after surgery in the ICU. The correlation flow index (CFx) was calculated as a moving, linear correlation coefficient between the cerebral flow index measured using UT-NIRS and mean arterial pressure (MAP). Optimal blood pressure was defined as the MAP with the lowest CFx. Changes in optimal blood pressure in the perioperative period were observed, and the association of blood pressure excursions (magnitude and duration) below the optimal blood pressure [area under the curve below optimal MAP (AUC<OptMAP), in mmHg×h] with the incidence of CSA-AKI (defined using Kidney Disease: Improving Global Outcomes criteria) was examined. Optimal blood pressure during early ICU stay and CPB was correlated (r = 0.46, P < 0.0001), but was significantly higher in the ICU than during CPB (75 ± 8.7 vs 71 ± 10.3 mmHg, P = 0.0002). Thirty patients (27.3%) developed CSA-AKI within 48 h after surgery. AUC<OptMAP was associated with CSA-AKI during CPB (median 13.27 mmHg×h, interquartile range (IQR) 4.63-20.14 vs median 6.05 mmHg×h, IQR 3.03-12.40, P = 0.008) and in the ICU (13.72 mmHg×h, IQR 5.09-25.54 vs 5.65 mmHg×h, IQR 1.71-13.07, P = 0.022). Optimal blood pressure during CPB and in the ICU was correlated. Excursions below optimal blood pressure (AUC<OptMAP) during the perioperative period are associated with CSA-AKI. Individualized blood pressure management based on cerebral autoregulation monitoring during the perioperative period may help improve CSA-AKI-related outcomes.
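
    A rough sketch of the CFx/optimal-MAP computation, assuming regularly sampled signals (window length, bin width and names are illustrative, not the monitoring system's algorithm):

        import numpy as np

        def optimal_map(cfi, map_mmHg, win=300, bin_width=5.0):
            """Moving correlation (CFx) between a cerebral flow index and MAP,
            averaged within MAP bins; the bin with the lowest mean CFx (least
            pressure-passive flow) is returned as the optimal MAP estimate."""
            n = len(cfi) - win
            cfx = np.array([np.corrcoef(cfi[i:i + win], map_mmHg[i:i + win])[0, 1]
                            for i in range(n)])
            map_mid = np.array([map_mmHg[i:i + win].mean() for i in range(n)])
            bins = np.floor(map_mid / bin_width) * bin_width + bin_width / 2
            centers = np.unique(bins)
            mean_cfx = np.array([cfx[bins == c].mean() for c in centers])
            return centers[np.argmin(mean_cfx)]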
