Customised City Maps in Mobile Applications for Senior Citizens.
Reins, Frank; Berker, Frank; Heck, Helmut
2017-01-01
Map services should be used in mobile applications for senior citizens. But do the commonly used map services meet the needs of elderly people? As an example, the paper examines the contrast ratios of common maps in comparison with an optimized, custom-rendered map.
Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P
2018-02-20
The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city based on real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour or rush hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03), and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.
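At its core, the scheduling tool described above issues one travel-time query per candidate facility and picks the minimum. A minimal sketch of that step is given below; it uses the public Google Distance Matrix web service rather than the Business API named in the abstract, and the API key and facility addresses are placeholder assumptions.

```python
# Minimal sketch: pick the IR site with the shortest real-time travel time.
# Assumes a valid Distance Matrix API key; facility addresses are illustrative placeholders.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
FACILITIES = {
    "Site A": "1275 York Ave, New York, NY",
    "Site B": "500 Westchester Ave, West Harrison, NY",
}

def best_site_by_travel_time(patient_address):
    url = "https://maps.googleapis.com/maps/api/distancematrix/json"
    params = {
        "origins": patient_address,
        "destinations": "|".join(FACILITIES.values()),
        "departure_time": "now",   # requests traffic-aware durations
        "key": API_KEY,
    }
    elements = requests.get(url, params=params).json()["rows"][0]["elements"]
    # Assumes traffic-aware durations are returned for each destination.
    times = {name: e["duration_in_traffic"]["value"]  # seconds
             for name, e in zip(FACILITIES, elements)}
    return min(times, key=times.get), times

if __name__ == "__main__":
    site, times = best_site_by_travel_time("Stamford, CT")
    print(site, times)
```

Running the same query at different departure times would reproduce the rush-hour versus non-rush-hour comparison reported above.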
A multi-resolution approach for optimal mass transport
NASA Astrophysics Data System (ADS)
Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen
2007-09-01
Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
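For intuition about the L2-optimal map itself (independent of the gradient-descent and multiresolution machinery described above), the one-dimensional case has a closed form: the optimal map composes the target's inverse CDF with the source's CDF. A small synthetic sketch of that special case follows; it does not reproduce the paper's scheme.

```python
# 1D illustration of the optimal L2 transport map: T = F_target^{-1} composed with F_source.
import numpy as np

x = np.linspace(-4, 4, 2001)
source = np.exp(-0.5 * x**2)                    # unnormalized Gaussian density, mean 0
target = np.exp(-0.5 * (x - 1.5)**2 / 0.25)     # unnormalized Gaussian density, mean 1.5

def cdf(density):
    c = np.cumsum(density)
    return c / c[-1]

Fs, Ft = cdf(source), cdf(target)
# Monotone map T(x): for each source quantile, find the matching target location.
T = np.interp(Fs, Ft, x)

print("mass at x=0 is pushed to about", round(float(np.interp(0.0, x, T)), 2))
```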
Application of Contraction Mappings to the Control of Nonlinear Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Killingsworth, W. R., Jr.
1972-01-01
The theoretical and applied aspects of successive approximation techniques are considered for the determination of controls for nonlinear dynamical systems. Particular emphasis is placed upon the methods of contraction mappings and modified contraction mappings. It is shown that application of the Pontryagin principle to the optimal nonlinear regulator problem results in necessary conditions for optimality in the form of a two point boundary value problem (TPBVP). The TPBVP is represented by an operator equation and functional analytic results on the iterative solution of operator equations are applied. The general convergence theorems are translated and applied to those operators arising from the optimal regulation of nonlinear systems. It is shown that simply structured matrices and similarity transformations may be used to facilitate the calculation of the matrix Green functions and the evaluation of the convergence criteria. A controllability theory based on the integral representation of TPBVP's, the implicit function theorem, and contraction mappings is developed for nonlinear dynamical systems. Contraction mappings are theoretically and practically applied to a nonlinear control problem with bounded input control and the Lipschitz norm is used to prove convergence for the nondifferentiable operator. A dynamic model representing community drug usage is developed and the contraction mappings method is used to study the optimal regulation of the nonlinear system.
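The successive-approximation machinery above rests on the Banach fixed-point theorem: a contraction on a complete metric space has a unique fixed point, reached by iterating the map from any starting guess. A generic numerical sketch (not the thesis's boundary-value operators) is:

```python
# Fixed-point iteration for a contraction mapping T; here T(x) = cos(x), which contracts near its fixed point.
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

print(fixed_point(math.cos, 1.0))  # converges to ~0.739085
```

For the control problems above, T would be the integral operator derived from the two point boundary value problem rather than a scalar function.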
3D nonrigid registration via optimal mass transport on the GPU.
Ur Rehman, Tauseef; Haber, Eldad; Pryor, Gallagher; Melonakos, John; Tannenbaum, Allen
2009-12-01
In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster than previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.
An optimization method of VON mapping for energy efficiency and routing in elastic optical networks
NASA Astrophysics Data System (ADS)
Liu, Huanlin; Xiong, Cuilian; Chen, Yong; Li, Changping; Chen, Derun
2018-03-01
To improve resource utilization efficiency, network virtualization in elastic optical networks has been developed by sharing the same physical network among different users and applications. In the process of virtual node mapping, longer paths between physical nodes consume more spectrum resources and energy. To address this problem, we propose a virtual optical network mapping algorithm called the genetic multi-objective optimization virtual optical network mapping algorithm (GM-OVONM-AL), which jointly optimizes energy consumption and spectrum resource consumption in the process of virtual optical network mapping. Firstly, a vector function is proposed to balance energy consumption and spectrum resources by optimizing population classification and crowding distance sorting. Then, an adaptive crossover operator based on hierarchical comparison is proposed to improve search ability and convergence speed. In addition, the principle of survival of the fittest is introduced to select better individuals according to domination rank. Compared with the spectrum consecutiveness-opaque virtual optical network mapping algorithm and the baseline opaque virtual optical network mapping algorithm, simulation results show that the proposed GM-OVONM-AL achieves the lowest bandwidth blocking probability and saves energy.
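The crowding-distance sorting mentioned above is the standard NSGA-II device for keeping a Pareto front well spread over the two objectives (energy and spectrum consumption). A compact sketch of that computation, with illustrative objective values, follows.

```python
# Crowding distance for a set of two-objective solutions (e.g., energy use, spectrum use); smaller objectives are better.
import numpy as np

def crowding_distance(objectives):
    """objectives: (n_solutions, n_objectives) array."""
    n, m = objectives.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objectives[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf       # always keep boundary solutions
        span = objectives[order[-1], j] - objectives[order[0], j]
        if span == 0:
            continue
        # For interior solutions, add the normalized gap between their neighbors along objective j.
        dist[order[1:-1]] += (objectives[order[2:], j] - objectives[order[:-2], j]) / span
    return dist

pop = np.array([[3.0, 9.0], [4.0, 7.0], [6.0, 4.0], [9.0, 2.0]])  # illustrative objective values
print(crowding_distance(pop))
```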
Grist, Eric P M; Flegg, Jennifer A; Humphreys, Georgina; Mas, Ignacio Suay; Anderson, Tim J C; Ashley, Elizabeth A; Day, Nicholas P J; Dhorda, Mehul; Dondorp, Arjen M; Faiz, M Abul; Gething, Peter W; Hien, Tran T; Hlaing, Tin M; Imwong, Mallika; Kindermans, Jean-Marie; Maude, Richard J; Mayxay, Mayfong; McDew-White, Marina; Menard, Didier; Nair, Shalini; Nosten, Francois; Newton, Paul N; Price, Ric N; Pukrittayakamee, Sasithon; Takala-Harrison, Shannon; Smithuis, Frank; Nguyen, Nhien T; Tun, Kyaw M; White, Nicholas J; Witkowski, Benoit; Woodrow, Charles J; Fairhurst, Rick M; Sibley, Carol Hopkins; Guerin, Philippe J
2016-10-24
Artemisinin-resistant Plasmodium falciparum malaria parasites are now present across much of mainland Southeast Asia, where ongoing surveys are measuring and mapping their spatial distribution. These efforts require substantial resources. Here we propose a generic 'smart surveillance' methodology to identify optimal candidate sites for future sampling and thus map the distribution of artemisinin resistance most efficiently. The approach uses the 'uncertainty' map generated iteratively by a geostatistical model to determine optimal locations for subsequent sampling. The methodology is illustrated using recent data on the prevalence of the K13-propeller polymorphism (a genetic marker of artemisinin resistance) in the Greater Mekong Subregion. This methodology, which has broader application to geostatistical mapping in general, could improve the quality and efficiency of drug resistance mapping and thereby guide practical operations to eliminate malaria in affected areas.
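The 'smart surveillance' rule above (sample next where the model is most uncertain) can be sketched with a generic Gaussian-process surrogate standing in for the paper's geostatistical model; the coordinates and prevalence values below are synthetic.

```python
# Pick the next survey site as the candidate location with the largest predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
surveyed = rng.uniform(0, 100, size=(25, 2))      # synthetic (x, y) coordinates of surveyed sites
prevalence = rng.uniform(0, 1, size=25)           # synthetic K13 marker prevalence at those sites

gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), alpha=1e-3)
gp.fit(surveyed, prevalence)

# Evaluate predictive standard deviation on a grid of candidate locations.
candidates = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                                  np.linspace(0, 100, 50)), axis=-1).reshape(-1, 2)
_, std = gp.predict(candidates, return_std=True)
print("next optimal sampling site:", candidates[np.argmax(std)])
```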
Optimal Mass Transport for Shape Matching and Comparison
Su, Zhengyu; Wang, Yalin; Shi, Rui; Zeng, Wei; Sun, Jian; Luo, Feng; Gu, Xianfeng
2015-01-01
Surface based 3D shape analysis plays a fundamental role in computer vision and medical imaging. This work proposes to use the optimal mass transport map for shape matching and comparison, focusing on two important applications: surface registration and shape space. The computation of the optimal mass transport map is based on Monge-Brenier theory; compared with the conventional method based on Monge-Kantorovich theory, this significantly improves efficiency by reducing the computational complexity from O(n^2) to O(n). For the surface registration problem, one commonly used approach is to use a conformal map to convert the shapes into some canonical space. Although conformal mappings have small angle distortions, they may introduce large area distortions, which are likely to cause numerical instability and thus failures of shape analysis. This work proposes to compose the conformal map with the optimal mass transport map to obtain an area-preserving map, which is intrinsic to the Riemannian metric, unique, and diffeomorphic. For the shape space study, this work introduces a novel Riemannian framework, Conformal Wasserstein Shape Space, by combining conformal geometry and optimal mass transport theory. In our work, all metric surfaces with the disk topology are mapped to the unit planar disk by a conformal mapping, which pushes the area element on the surface to a probability measure on the disk. The optimal mass transport provides a map from the shape space of all topological disks with metrics to the Wasserstein space of the disk, and the pullback Wasserstein metric equips the shape space with a Riemannian metric. We validate our work by numerous experiments and comparisons with prior approaches, and the experimental results demonstrate the efficiency and efficacy of our proposed approach. PMID:26440265
MaMR: High-performance MapReduce programming model for material cloud applications
NASA Astrophysics Data System (ADS)
Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng
2017-02-01
With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data sets, and the processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently based on a hybrid shared-memory BSP model. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework deliver effective performance improvements compared to previous work.
Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua
2017-07-01
Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and could cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, in which the parameters are formulated based on the original model with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initialization method with adaptive inertia weight based on a chaotic map is proposed. To show that the swarm converges faster than with stochastic initialization and to obtain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched by self-tuning, and the simulation results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
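The chaotic-map modification above replaces purely stochastic initialization with a logistic-map sequence to improve the ergodicity of the initial swarm. A minimal sketch of that initialization step (not the full Prandtl-Ishlinskii identification) is:

```python
# Initialize PSO particle positions with a logistic chaotic map instead of uniform random numbers.
import numpy as np

def chaotic_init(n_particles, n_dims, lower, upper, mu=4.0, x0=0.7):
    z = np.empty((n_particles, n_dims))
    x = x0
    for i in range(n_particles):
        for j in range(n_dims):
            x = mu * x * (1.0 - x)          # logistic map, fully chaotic at mu = 4
            z[i, j] = x
    return lower + z * (upper - lower)      # scale the chaotic sequence into the search box

positions = chaotic_init(n_particles=20, n_dims=5, lower=-1.0, upper=1.0)
print(positions.shape, positions.min(), positions.max())
```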
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby
On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
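Of the reordering methods listed above, spectral bisection is the easiest to illustrate: split the MPI communication graph by the sign of the Fiedler vector of its Laplacian. A toy sketch with a synthetic four-rank communication matrix (and no node-layout data) is shown below.

```python
# Spectral bisection of a communication graph: split ranks by the sign of the Fiedler vector.
import numpy as np

comm = np.array([[0, 5, 1, 0],      # synthetic symmetric communication volumes between 4 ranks
                 [5, 0, 1, 0],
                 [1, 1, 0, 6],
                 [0, 0, 6, 0]], dtype=float)

laplacian = np.diag(comm.sum(axis=1)) - comm
eigvals, eigvecs = np.linalg.eigh(laplacian)
fiedler = eigvecs[:, 1]                      # eigenvector of the second-smallest eigenvalue
group_a = np.where(fiedler >= 0)[0]
group_b = np.where(fiedler < 0)[0]
print("partition:", group_a.tolist(), group_b.tolist())
```

Ranks that communicate heavily end up in the same half, which a task-mapping tool can then place on nearby nodes.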
NASA Astrophysics Data System (ADS)
Chandra, Rishabh
Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems, which have only linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
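The target form of the mapping above is a QUBO: minimize x^T Q x over binary vectors x. For reference, a tiny QUBO can be solved exactly by enumeration; the Q matrix below is an arbitrary illustration, not a mapped PDECCO instance, and an adiabatic quantum optimizer is only needed once enumeration becomes infeasible.

```python
# Brute-force solution of a small QUBO: minimize x^T Q x over x in {0,1}^n.
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])      # illustrative upper-triangular QUBO matrix

best_x, best_val = None, float("inf")
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits, dtype=float)
    val = float(x @ Q @ x)
    if val < best_val:
        best_x, best_val = bits, val
print(best_x, best_val)                  # expected: (1, 0, 1) with value -2.0
```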
Metric Optimization for Surface Analysis in the Laplace-Beltrami Embedding Space
Lai, Rongjie; Wang, Danny J.J.; Pelletier, Daniel; Mohr, David; Sicotte, Nancy; Toga, Arthur W.
2014-01-01
In this paper we present a novel approach for the intrinsic mapping of anatomical surfaces and its application in brain mapping research. Using the Laplace-Beltrami eigen-system, we represent each surface with an isometry invariant embedding in a high dimensional space. The key idea in our system is that we realize surface deformation in the embedding space via the iterative optimization of a conformal metric without explicitly perturbing the surface or its embedding. By minimizing a distance measure in the embedding space with metric optimization, our method generates a conformal map directly between surfaces with highly uniform metric distortion and the ability of aligning salient geometric features. Besides pairwise surface maps, we also extend the metric optimization approach for group-wise atlas construction and multi-atlas cortical label fusion. In experimental results, we demonstrate the robustness and generality of our method by applying it to map both cortical and hippocampal surfaces in population studies. For cortical labeling, our method achieves excellent performance in a cross-validation experiment with 40 manually labeled surfaces, and successfully models localized brain development in a pediatric study of 80 subjects. For hippocampal mapping, our method produces much more significant results than two popular tools on a multiple sclerosis study of 109 subjects. PMID:24686245
Networking for large-scale science: infrastructure, provisioning, transport and application mapping
NASA Astrophysics Data System (ADS)
Rao, Nageswara S.; Carter, Steven M.; Wu, Qishi; Wing, William R.; Zhu, Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M.
2005-01-01
Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configurations and protocols that provide multiple Gbps flows from the Cray X1 to external hosts.
Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.
Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue
2015-08-20
Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.
Yin, Changchuan
2015-04-01
To apply digital signal processing (DSP) methods to analyze DNA sequences, the sequences first must be mapped into numerical sequences. Thus, effective numerical mappings of DNA sequences play key roles in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, the existing mapping methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC) in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied in exon and intron prediction by the Short-Time Fourier Transform (STFT) approach. The results show the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
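The 3-periodicity signal-to-noise ratio that the GCC values are tuned to maximize can be computed for any fixed numerical mapping of the bases. The sketch below uses the common EIIP single-value mapping rather than the optimized GCC values from the paper, together with a simple power-spectrum SNR definition; the test sequence is synthetic.

```python
# Period-3 signal-to-noise ratio of a numerically mapped DNA sequence, a standard exon-detection cue.
import numpy as np

mapping = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}  # EIIP values, one common fixed mapping

def snr3(seq):
    x = np.array([mapping[b] for b in seq])
    spectrum = np.abs(np.fft.fft(x - x.mean())) ** 2
    n = len(x)
    # Simple SNR definition: power at the period-3 frequency over the mean spectral power.
    return spectrum[n // 3] / spectrum[1:n // 2].mean()

seq = "ATGGCC" * 40 + "ACGTGTTGCAAT" * 5   # synthetic: repetitive "coding-like" region plus a tail (length 300)
print(round(float(snr3(seq)), 2))
```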
1994-02-01
desired that the problem to which the design space mapping techniques were applied be easily analyzed, yet provide a design space with realistic complexity...consistent fully stressed solution. 3 DESIGN SPACE MAPPING In order to reduce the computational expense required to optimize design spaces, neural networks...employed in this study. Some of the issues involved in using neural networks to do design space mapping are how to configure the neural network, how much
Development of management information system for land in mine area based on MapInfo
NASA Astrophysics Data System (ADS)
Wang, Shi-Dong; Liu, Chuang-Hua; Wang, Xin-Chuang; Pan, Yan-Yu
2008-10-01
MapInfo is currently a popular GIS software package. This paper introduces the characteristics of MapInfo and the secondary GIS development methods it offers, which comprise three approaches based on MapBasic, OLE automation, and the MapX control, respectively. Taking the development of a land management information system for a mining area as an example, the paper discusses the method of developing GIS applications based on MapX and describes the development of the system in detail, including the development environment, overall design, the design and realization of each function module, and a simple application of the system. The system uses MapX 5.0 and Visual Basic 6.0 as the development platform, takes SQL Server 2005 as the back-end database, and adopts Matlab 6.5 for back-end numerical calculation. On the basis of an integrated design, the system implements eight modules: start-up, layer control, spatial query, spatial analysis, data editing, application models, document management, and results output. The system can be used in a mining area for cadastral management, land use structure optimization, land reclamation, land evaluation, analysis and forecasting of land and environmental disruption in the mining area, thematic mapping, and so on.
Optimal stimulus scheduling for active estimation of evoked brain networks.
Kafashan, MohammadMehdi; Ching, ShiNung
2015-12-01
We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. We show that the problem of scheduling nodes to a probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
Optimal stimulus scheduling for active estimation of evoked brain networks
NASA Astrophysics Data System (ADS)
Kafashan, MohammadMehdi; Ching, ShiNung
2015-12-01
Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling nodes to a probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
ERIC Educational Resources Information Center
Chorpita, Bruce F.; Bernstein, Adam; Daleiden, Eric L.
2011-01-01
Objective: Despite substantial progress in the development and identification of psychosocial evidence-based treatments (EBTs) in mental health, there is minimal empirical guidance for selecting an optimal "set" of EBTs maximally applicable and generalizable to a chosen service sample. Relevance mapping is a proposed methodology that…
Cloud GPU-based simulations for SQUAREMR.
Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H
2017-01-01
Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms. The developed cloud-based cluster and optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would likely be acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.
A novel algorithm for fast and efficient multifocus wavefront shaping
NASA Astrophysics Data System (ADS)
Fayyaz, Zahra; Nasiriavanaki, Mohammadreza
2018-02-01
Wavefront shaping using a spatial light modulator (SLM) is a popular method for focusing light through turbid media, such as biological tissues. Usually, in iterative optimization methods, due to the very large number of pixels in the SLM, larger pixels (bins) are formed, and the phase values of the bins are changed to obtain an optimum phase map, and hence a focus. In this study an efficient optimization algorithm is proposed to obtain an arbitrary map of foci utilizing all the SLM pixels or small bin sizes. The application of such a methodology in dermatology, hair removal in particular, is explored and discussed.
Application of evolutionary computation in ECAD problems
NASA Astrophysics Data System (ADS)
Lee, Dae-Hyun; Hwang, Seung H.
1998-10-01
Design of modern electronic systems is a complicated task which demands the use of computer-aided design (CAD) tools. Since many problems in ECAD are combinatorial optimization problems, evolutionary computations such as genetic algorithms and evolutionary programming have been widely employed to solve them. We have applied evolutionary computation techniques to solve ECAD problems such as technology mapping, microcode-bit optimization, data path ordering and peak power estimation, where their benefits are well observed. This paper presents experiences and discusses issues in those applications.
High-throughput SNP genotyping for breeding applications in rice using the BeadXpress platform
USDA-ARS?s Scientific Manuscript database
Multiplexed single nucleotide polymorphism (SNP) markers have the potential to increase the speed and cost-effectiveness of genotyping, provided that an optimal SNP density is used for each application. To test the efficiency of multiplexed SNP genotyping for diversity, mapping and breeding applicat...
Mapping the Future: Optimizing Joint Geospatial Engineering Support
2006-05-16
Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.
Elçi, Alper
2017-12-01
Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function that is based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter in the calculation of the vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors are effective on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%. The regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the applicability of the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can also be easily applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
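The calibration above treats the index weights as decision variables and maximizes the correlation between the resulting vulnerability index and observed nitrate concentrations. A compact sketch of that formulation follows; the factor ratings and observations are synthetic, and SciPy's SLSQP solver is used as a stand-in for the generalized-reduced-gradient method, which SciPy does not provide.

```python
# Calibrate overlay-and-index weights by maximizing correlation with observed nitrate concentrations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
ratings = rng.integers(1, 11, size=(200, 7)).astype(float)   # synthetic DRASTIC factor ratings (7 factors)
true_w = np.array([5, 4, 3, 2, 1, 5, 3], dtype=float)
nitrate = ratings @ true_w + rng.normal(0, 5, 200)            # synthetic nitrate observations

def neg_corr(w):
    index = ratings @ w                                       # vulnerability index = weighted sum of ratings
    return -np.corrcoef(index, nitrate)[0, 1]

res = minimize(neg_corr, np.full(7, 3.0), method="SLSQP",
               bounds=[(1, 5)] * 7)                           # DRASTIC weights conventionally lie in [1, 5]
print("calibrated weights:", np.round(res.x, 2), "correlation:", round(-res.fun, 3))
```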
NASA Astrophysics Data System (ADS)
Gao, Wei; Zhu, Linli; Wang, Kaiyun
2015-12-01
Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented as higher-dimensional data by scholars, and thus the sparse learning techniques are introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, and a fast version of such ontology technologies. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
[Design of a microwave applicator for tumors in the superficial layer].
Sun, Bing; Lu, Xiaofeng; Cao, Yi
2010-05-01
A 2.45 GHz microstrip applicator using a single rectangular sheet structure is presented. Based on the radiation principle of microstrip antennas, the applicator's parameters are designed and the simulation model is built and optimized in HFSS. Measured with a network analyzer, the technical specifications of this applicator comply with the design requirements. During the irradiation experiment, under conditions of 30 W power, 30 mm radiation distance and 15 min duration, the thermal field distribution map of the phantom is obtained from a far-infrared imaging instrument. The 3D map shows that the central region of the thermal field has a small radius and deep heat penetration. The microwave energy from this applicator can reach tumors in the superficial layer without thermally injuring the surrounding normal tissue.
Statistical Significance of Optical Map Alignments
Sarkar, Deepayan; Goldstein, Steve; Schwartz, David C.
2012-01-01
The Optical Mapping System constructs ordered restriction maps spanning entire genomes through the assembly and analysis of large datasets comprising individually analyzed genomic DNA molecules. Such restriction maps uniquely reveal mammalian genome structure and variation, but also raise computational and statistical questions beyond those that have been solved in the analysis of smaller, microbial genomes. We address the problem of how to filter maps that align poorly to a reference genome. We obtain map-specific thresholds that control errors and improve iterative assembly. We also show how an optimal self-alignment score provides an accurate approximation to the probability of alignment, which is useful in applications seeking to identify structural genomic abnormalities. PMID:22506568
Kong, Bingxin; Liu, Siqi; Yin, Jie; Li, Shengru; Zhu, Zuqing
2018-05-28
Nowadays, it is common for service providers (SPs) to leverage hybrid clouds to improve the quality-of-service (QoS) of their Big Data applications. However, for achieving guaranteed latency and/or bandwidth in its hybrid cloud, an SP might desire to have a virtual datacenter (vDC) network, in which it can manage and manipulate the network connections freely. To address this requirement, we design and implement a network slicing and orchestration (NSO) system that can create and expand vDCs across optical/packet domains on-demand. Considering Hadoop MapReduce (M/R) as the use-case, we describe the proposed architectures of the system's data, control and management planes, and present the operation procedures for creating, expanding, monitoring and managing a vDC for M/R optimization. The proposed NSO system is then realized in a small-scale network testbed that includes four optical/packet domains, and we conduct experiments in it to demonstrate the whole operations of the data, control and management planes. Our experimental results verify that application-driven on-demand vDC expansion across optical/packet domains can be achieved for M/R optimization, and after being provisioned with a vDC, the SP using the NSO system can fully control the vDC network and further optimize the M/R jobs in it with network orchestration.
Cappel, Daniel; Sherman, Woody; Beuming, Thijs
2017-01-01
The ability to accurately characterize the solvation properties (water locations and thermodynamics) of biomolecules is of great importance to drug discovery. While crystallography, NMR, and other experimental techniques can assist in determining the structure of water networks in proteins and protein-ligand complexes, most water molecules are not fully resolved and accurately placed. Furthermore, understanding the energetic effects of solvation and desolvation on binding requires an analysis of the thermodynamic properties of solvent involved in the interaction between ligands and proteins. WaterMap is a molecular dynamics-based computational method that uses statistical mechanics to describe the thermodynamic properties (entropy, enthalpy, and free energy) of water molecules at the surface of proteins. This method can be used to assess the solvent contributions to ligand binding affinity and to guide lead optimization. In this review, we provide a comprehensive summary of published uses of WaterMap, including applications to lead optimization, virtual screening, selectivity analysis, ligand pose prediction, and druggability assessment. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Bernard, Elyse D; Nguyen, Kathy C; DeRosa, Maria C; Tayabali, Azam F; Aranda-Rodriguez, Rocio
2017-01-01
Aptamers are short oligonucleotide sequences used in detection systems because of their high affinity binding to a variety of macromolecules. With the introduction of aptamers over 25 years ago came the exploration of their use in many different applications as a substitute for antibodies. Aptamers have several advantages; they are easy to synthesize, can bind to analytes for which it is difficult to obtain antibodies, and in some cases bind better than antibodies. As such, aptamer applications have significantly expanded as an adjunct to a variety of different immunoassay designs. The Multiple-Analyte Profiling (xMAP) technology developed by Luminex Corporation commonly uses antibodies for the detection of analytes in small sample volumes through the use of fluorescently coded microbeads. This technology permits the simultaneous detection of multiple analytes in each sample tested and hence could be applied in many research fields. Although little work has been performed adapting this technology for use with aptamers, optimizing aptamer-based xMAP assays would dramatically increase the versatility of analyte detection. We report herein on the development of an xMAP bead-based aptamer/antibody sandwich assay for a biomarker of inflammation (C-reactive protein or CRP). Protocols for the coupling of aptamers to xMAP beads, validation of coupling, and for an aptamer/antibody sandwich-type assay for CRP are detailed. The optimized conditions, protocols and findings described in this research could serve as a starting point for the development of new aptamer-based xMAP assays.
A Numerical Study of New Logistic Map
NASA Astrophysics Data System (ADS)
Khmou, Youssef
In this paper, we propose a new logistic map based on the information entropy relation, and we study its bifurcation diagram in comparison with that of the standard logistic map. In the first part, we compare the diagram obtained by numerical simulations with that of the standard logistic map. It is found that the structures of both diagrams are similar when the range of the growth parameter is restricted to the interval [0,e]. In the second part, we present an application of the proposed map to traffic flow using a macroscopic model. It is found that the bifurcation diagram is an exact model of Greenberg's model of traffic flow, where the growth parameter corresponds to the optimal velocity and the random sequence corresponds to the density. In the last part, we present a second possible application of the proposed map, which consists of random number generation. The results of the analysis show that the excluded initial values of the sequences are (0,1).
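For reference, the bifurcation diagram of the standard logistic map that the proposed entropy-based map is compared against can be generated in a few lines; the new map itself is not reproduced here.

```python
# Bifurcation data for the standard logistic map x_{n+1} = r * x_n * (1 - x_n).
import numpy as np

r_values = np.linspace(2.5, 4.0, 600)
points = []
for r in r_values:
    x = 0.5
    for _ in range(500):          # discard transient iterations
        x = r * x * (1 - x)
    for _ in range(100):          # record samples on the attractor
        x = r * x * (1 - x)
        points.append((r, x))

print(len(points), "(r, x) pairs; scatter-plot them to see the bifurcation diagram")
```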
Ant Colony Optimization for Mapping, Scheduling and Placing in Reconfigurable Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrandi, Fabrizio; Lanzi, Pier Luca; Pilato, Christian
Modern heterogeneous embedded platforms, composed of several digital signal, application specific and general purpose processors, also include reconfigurable devices supporting partial dynamic reconfiguration. These devices can change the behavior of some of their parts during execution, allowing hardware acceleration of more sections of the applications. Nevertheless, partial dynamic reconfiguration imposes severe overheads in terms of latency. For such systems, a critical part of the design phase is deciding on which processing elements (mapping) and when (scheduling) to execute a task, but also how to place tasks on the reconfigurable device to guarantee the most efficient reuse of the programmable logic. In this paper we propose an algorithm based on Ant Colony Optimization (ACO) that simultaneously executes the scheduling, the mapping and the linear placing of tasks, hiding reconfiguration overheads through prefetching. Our heuristic gradually constructs solutions and then searches around the best ones, cutting out non-promising areas of the design space. We show how to consider the partial dynamic reconfiguration constraints in the scheduling, placing and mapping problems and compare our formulation to other heuristics that address the same problems. We demonstrate that our proposal is more general and robust, and finds better solutions (16.5% on average) with respect to competing solutions.
An Integrated Framework for Parameter-based Optimization of Scientific Workflows.
Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel
2009-01-01
Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.
Drawing road networks with focus regions.
Haunert, Jan-Henrik; Sering, Leon
2011-12-01
Mobile users of maps typically need detailed information about their surroundings plus some context information about remote places. To avoid the map becoming too dense in places, cartographers have designed mapping functions that enlarge a user-defined focus region; such functions are sometimes called fish-eye projections. The extra map space occupied by the enlarged focus region is compensated by distorting other parts of the map. We argue that, in a map showing a network of roads relevant to the user, distortion should preferably take place in those areas where the network is sparse. Therefore, we do not apply a predefined mapping function. Instead, we consider the road network as a graph whose edges are the road segments. We compute a new spatial mapping with a graph-based optimization approach, minimizing the sum of squared distortions at edges. Our optimization method is based on a convex quadratic program (CQP); CQPs can be solved in polynomial time. Important requirements on the output map are expressed as linear inequalities. In particular, we show how to forbid edge crossings. We have implemented our method in a prototype tool. For instances of different sizes, our method generated output maps that were far less distorted than those generated with a predefined fish-eye projection. Future work is needed to automate the selection of roads relevant to the user. Furthermore, we aim at fast heuristics for application in real-time systems. © 2011 IEEE
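The formulation above, minimizing the sum of squared edge distortions subject to linear constraints, is a convex quadratic program. A toy sketch with a four-vertex road graph follows, using CVXPY as a generic CQP solver; the coordinates, enlargement factor, and choice of focus edge are invented, and the paper's crossing constraints are omitted.

```python
# Toy CQP: move vertices so the focus edge is enlarged while squared distortion of the other edges is minimized.
import cvxpy as cp
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])   # original vertex positions
edges = [(0, 1), (1, 2), (2, 3)]
focus_edges = {(1, 2)}                                                 # edges inside the focus region
scale = 2.0                                                            # enlargement factor for the focus

new = cp.Variable(coords.shape)
constraints = [new[0] == coords[0]]                                    # pin one vertex to remove translation freedom
cost = 0
for (i, j) in edges:
    target = (scale if (i, j) in focus_edges else 1.0) * (coords[j] - coords[i])
    cost += cp.sum_squares((new[j] - new[i]) - target)                 # squared distortion of this edge
cp.Problem(cp.Minimize(cost), constraints).solve()
print(np.round(new.value, 3))
```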
Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.
2017-01-01
Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
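The sampling idea evaluated above can be illustrated with plain quantile class breaks: compute the breaks from a random sample of the attribute and compare them with breaks computed from the full data. The sketch below is a generic illustration, not the Monte Carlo design of the paper.

```python
# Compare choropleth class breaks computed on a sample with breaks computed on the full attribute vector.
import numpy as np

rng = np.random.default_rng(42)
values = rng.lognormal(mean=10, sigma=1, size=1_000_000)   # synthetic skewed attribute for a large map
k = 5                                                      # desired number of classes

def quantile_breaks(x, k):
    return np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])  # interior break points only

full_breaks = quantile_breaks(values, k)
sample_breaks = quantile_breaks(rng.choice(values, size=10_000, replace=False), k)

print("full:  ", np.round(full_breaks))
print("sample:", np.round(sample_breaks))
print("max relative error:", float(np.max(np.abs(sample_breaks - full_breaks) / full_breaks)))
```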
2008-09-01
One implication of this is that the instrument can physically resolve satellites at smaller separations than current and existing optical SSA assets...with the potential for 24/7 taskability and near-real time capability. By optimizing an instrument to perform position measurement rather than...sensors. The J-MAPS baseline also includes a novel filter-grating wheel, of interest in the area of non-resolved object characterization. We discuss the
On-line photolithography modeling using spectrophotometry and Prolith/2
NASA Astrophysics Data System (ADS)
Engstrom, Herbert L.; Beacham, Jeanne E.
1994-05-01
Spectrophotometry has been applied to optimizing photolithography processes in semiconductor manufacturing. For many years thin film measurement systems have been used in manufacturing for controlling film deposition processes. The combination of film thickness mapping with photolithography modeling has expanded the applications of this technology. Experimental measurements of dose-to-clear, the minimum light exposure dose required to fully develop a photoresist, are described. It is shown how dose-to-clear and photoresist contrast may be determined rapidly and conveniently from measurements of a dose exposure matrix on a monitor wafer. Such experimental measurements may underestimate the dose-to-clear because of thickness variations of the photoresist and underlying layers on the product wafer. Online modeling of the photolithographic process together with film thickness maps of the entire wafer can overcome this problem. Such modeling also provides maps of dose-to-clear and resist linewidth that can be used to estimate and optimize yield.
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.
Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G
2014-12-01
Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
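Stripped of the MC dose engine, DVH objectives, and GPU machinery, the optimization step above is at heart a least-squares fit of prescribed voxel doses by a nonnegative combination of per-spot dose maps. A drastically reduced sketch with a random stand-in for the dose influence matrix:

```python
# Toy spot-weight optimization: fit prescribed voxel doses with a nonnegative combination of spot dose maps.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_voxels, n_spots = 500, 60
influence = rng.random((n_voxels, n_spots)) * 0.1     # stand-in for the MC dose influence matrix (Gy per unit weight)
prescription = np.full(n_voxels, 2.0)                  # uniform 2 Gy target dose, purely illustrative

weights, residual = nnls(influence, prescription)      # least squares with spot weights constrained to be >= 0
dose = influence @ weights
print("max weight:", round(weights.max(), 2),
      "mean dose:", round(dose.mean(), 3),
      "residual norm:", round(residual, 3))
```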
Generalized Differential Calculus and Applications to Optimization
NASA Astrophysics Data System (ADS)
Rector, Robert Blake Hayden
This thesis contains contributions in three areas: the theory of generalized calculus, numerical algorithms for operations research, and applications of optimization to problems in modern electric power systems. A geometric approach is used to advance the theory and tools used for studying generalized notions of derivatives for nonsmooth functions. These advances specifically pertain to methods for calculating subdifferentials and to expanding our understanding of a certain notion of derivative of set-valued maps, called the coderivative, in infinite dimensions. A strong understanding of the subdifferential is essential for numerical optimization algorithms, which are developed and applied to nonsmooth problems in operations research, including non-convex problems. Finally, an optimization framework is applied to solve a problem in electric power systems involving a smart solar inverter and battery storage system providing energy and ancillary services to the grid.
A DSS for sustainable development and environmental protection of agricultural regions.
Manos, Basil D; Papathanasiou, Jason; Bournaris, Thomas; Voudouris, Kostas
2010-05-01
This paper presents a decision support system (DSS) for sustainable development and environmental protection of agricultural regions developed in the framework of the Interreg-Archimed project entitled WaterMap (development and utilization of vulnerability maps for the monitoring and management of groundwater resources in the ARCHIMED areas). Its aim is to optimize the production plan of an agricultural region taking into account the available resources, the environmental parameters, and the vulnerability map of the region. The DSS is based on a multicriteria optimization model. The spatial integration of vulnerability maps in the DSS enables regional authorities to design policies for optimal agricultural development and groundwater protection from agricultural land uses. The DSS can further be used by local stakeholders to simulate different scenarios and policies by changing various social, economic, and environmental parameters. In this way, they can obtain alternative production plans and agricultural land uses as well as estimate the economic, social, and environmental impacts of different policies. The DSS is computerized and supported by a set of relational databases. The corresponding software has been developed on a Microsoft Windows XP platform, using Microsoft Visual Basic, Microsoft Access, and the LINDO library. For demonstration purposes, the paper includes an application of the DSS in a region of Northern Greece.
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automatization, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
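The first approach above, tuning the weights of an algebraic map-combination function with the Nelder-Mead downhill simplex, can be sketched generically: combine several predictor grids with weights and minimize the misfit to a target grid, starting from a "geologic" first guess. The grids below are synthetic.

```python
# Nelder-Mead search for weights that combine several predictor maps into one target map.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
predictors = rng.random((3, 40, 40))                   # synthetic structure / isopach / petrophysical grids
target = 0.6 * predictors[0] + 0.3 * predictors[1] + 0.1 * predictors[2] + rng.normal(0, 0.02, (40, 40))

def misfit(w):
    combined = np.tensordot(w, predictors, axes=1)     # weighted sum of the predictor maps
    return float(np.mean((combined - target) ** 2))

first_guess = np.array([0.4, 0.4, 0.2])                # "geologic" first guess to start the simplex
res = minimize(misfit, first_guess, method="Nelder-Mead")
print("recovered weights:", np.round(res.x, 2))
```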
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving it. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for the two phases, respectively. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the resources available at each substrate node and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs well.
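The greedy node-mapping idea can be illustrated with a small sketch; the scoring rule (prefer nodes with more free resources, break ties by closeness to already-mapped nodes) and the data structures are assumptions made here for illustration, not the published algorithm.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_node_mapping(virtual_nodes, substrate):
    # virtual_nodes: {vid: cpu_demand}; substrate: {sid: (cpu_free, (x, y))}
    mapping, used = {}, set()
    for vid, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
        candidates = [(sid, cpu, pos) for sid, (cpu, pos) in substrate.items()
                      if cpu >= demand and sid not in used]
        if not candidates:
            raise ValueError("no feasible substrate node for " + str(vid))
        def score(c):
            sid, cpu, pos = c
            # prefer plentiful resources, then closeness to nodes already chosen
            near = min((dist(pos, substrate[m][1]) for m in mapping.values()), default=0.0)
            return (-cpu, near)
        sid = min(candidates, key=score)[0]
        mapping[vid] = sid
        used.add(sid)
    return mapping

print(greedy_node_mapping({"a": 4, "b": 2},
                          {1: (5, (0, 0)), 2: (3, (1, 0)), 3: (8, (5, 5))}))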
An Application of Multi-Criteria Shortest Path to a Customizable Hex-Map Environment
2015-03-26
… forces which could act as intermediate destinations or obstacles to movement through the network; this is similar to a traveling salesman problem. … The shortest path problem of finding the optimal path through a complex network is well studied in the field of operations research. This research presents an application of the shortest path problem to a customizable map with terrain features and enemy engagement risk. The PathFinder …
NASA Astrophysics Data System (ADS)
Li, Linyi; Chen, Yun; Yu, Xin; Liu, Rui; Huang, Chang
2015-03-01
The study of flood inundation is significant to human life and social economy. Remote sensing technology has provided an effective way to study the spatial and temporal characteristics of inundation. Remotely sensed images with high temporal resolutions are widely used in mapping inundation. However, mixed pixels do exist due to their relatively low spatial resolutions. One of the most popular approaches to resolve this issue is sub-pixel mapping. In this paper, a novel discrete particle swarm optimization (DPSO) based sub-pixel flood inundation mapping (DPSO-SFIM) method is proposed to achieve an improved accuracy in mapping inundation at a sub-pixel scale. The evaluation criterion for sub-pixel inundation mapping is formulated. The DPSO-SFIM algorithm is developed, including particle discrete encoding, fitness function designing and swarm search strategy. The accuracy of DPSO-SFIM in mapping inundation at a sub-pixel scale was evaluated using Landsat ETM + images from study areas in Australia and China. The results show that DPSO-SFIM consistently outperformed the four traditional SFIM methods in these study areas. A sensitivity analysis of DPSO-SFIM was also carried out to evaluate its performances. It is hoped that the results of this study will enhance the application of medium-low spatial resolution images in inundation detection and mapping, and thereby support the ecological and environmental studies of river basins.
Coordinated Optimization of Visual Cortical Maps (I) Symmetry-based Analysis
Reichl, Lars; Heide, Dominik; Löwel, Siegrid; Crowley, Justin C.; Kaschube, Matthias; Wolf, Fred
2012-01-01
In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explain cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps such that inferences about the optimization principle from map layout appear viable. We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps. PMID:23144599
Matiukas, Arvydas; Mitrea, Bogdan G; Qin, Maochun; Pertsov, Arkady M; Shvedko, Alexander G; Warren, Mark D; Zaitsev, Alexey V; Wuskell, Joseph P; Wei, Mei-de; Watras, James; Loew, Leslie M
2007-11-01
Styryl voltage-sensitive dyes (e.g., di-4-ANEPPS) have been used successfully for optical mapping in cardiac cells and tissues. However, their utility for probing electrical activity deep inside the myocardial wall and in blood-perfused myocardium has been limited because of light scattering and high absorption by endogenous chromophores and hemoglobin at blue-green excitation wavelengths. The purpose of this study was to characterize two new styryl dyes--di-4-ANBDQPQ (JPW-6003) and di-4-ANBDQBS (JPW-6033)--optimized for blood-perfused tissue and intramural optical mapping. Voltage-dependent spectra were recorded in a model lipid bilayer. Optical mapping experiments were conducted in four species (mouse, rat, guinea pig, and pig). Hearts were Langendorff perfused using Tyrode's solution and blood (pig). Dyes were loaded via bolus injection into perfusate. Transillumination experiments were conducted in isolated coronary-perfused pig right ventricular wall preparations. The optimal excitation wavelength in cardiac tissues (650 nm) was >70 nm beyond the absorption maximum of hemoglobin. Voltage sensitivity of both dyes was approximately 10% to 20%. Signal decay half-life due to dye internalization was 80 to 210 minutes, which is 5 to 7 times slower than for di-4-ANEPPS. In transillumination mode, DeltaF/F was as high as 20%. In blood-perfused tissues, DeltaF/F reached 5.5% (1.8 times higher than for di-4-ANEPPS). We have synthesized and characterized two new near-infrared dyes with excitation/emission wavelengths shifted >100 nm to the red. They provide both high voltage sensitivity and 5 to 7 times slower internalization rate compared to conventional dyes. The dyes are optimized for deeper tissue probing and optical mapping of blood-perfused tissue, but they also can be used for conventional applications.
Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Govindaraju, Madhusudhan
Advanced Scientific Computing Research, Computer Science, FY 2010 Report. Center for Technology for Advanced Scientific Component Software: Distributed CCA. State University of New York, Binghamton, NY, 13902. Summary: The overall objective of Binghamton's involvement is to work on enhancements of the CCA environment, motivated by the applications and research initiatives discussed in the proposal. This year we re-focused our design and development efforts on proof-of-concept implementations that have the potential to significantly impact scientific components. We developed parallel implementations for non-hydrostatic code and a model coupling interface for biogeochemical computations coded in MATLAB. We also worked on the design and implementation of modules required for the emerging MapReduce model to be effective for scientific applications. Finally, we focused on optimizing the processing of scientific datasets on multi-core processors. Research Details: We worked on the following research projects, which we are applying to CCA-based scientific applications. 1. Non-Hydrostatic Hydrodynamics: Non-hydrostatic hydrodynamics is significantly more accurate at modeling internal waves that may be important in lake ecosystems. Non-hydrostatic codes, however, are significantly more computationally expensive, often prohibitively so. We have worked with Chin Wu at the University of Wisconsin to parallelize non-hydrostatic code and have obtained a maximum speed-up of about 26 times. Although this is significant progress, we hope to improve the performance further so that it becomes a practical alternative to hydrostatic codes. 2. Model Coupling for Water-Based Ecosystems: Answering pressing questions about water resources requires that physical (hydrodynamic) models be coupled with biological and chemical models. Most hydrodynamics codes are written in Fortran, however, while most ecologists work in MATLAB. This disconnect creates a great barrier. To address this, we are working on a model coupling interface that will allow biogeochemical computations written in MATLAB to couple with Fortran codes. This will greatly improve the productivity of ecosystem scientists. 3. Low-Overhead and Elastic MapReduce Implementation Optimized for Memory- and CPU-Intensive Applications: Since its inception, MapReduce has frequently been associated with Hadoop and large-scale datasets. Its deployment at Amazon in the cloud, and its applications at Yahoo! for large-scale distributed document indexing and database building, among other tasks, have thrust MapReduce to the forefront of the data processing application domain. The applicability of the paradigm, however, extends far beyond its use with data-intensive applications and disk-based systems, and can also be brought to bear on small but CPU-intensive distributed applications. MapReduce, however, carries its own burdens. Through experiments using Hadoop in the context of diverse applications, we uncovered latencies and delay conditions potentially inhibiting the expected performance of a parallel execution in CPU-intensive applications. Furthermore, as it currently stands, MapReduce is favored for data-centric applications and as such tends to be applied solely to disk-based applications. The paradigm falls short in bringing its novelty to diskless systems dedicated to in-memory applications and to compute-intensive programs processing much smaller data but requiring intensive computations.
In this project, we focused both on the performance of processing large-scale hierarchical data in distributed scientific applications and on the processing of smaller but demanding input sizes primarily used in diskless, memory-resident I/O systems. We designed LEMO-MR [1], a low-overhead, elastic implementation of MapReduce that is configurable for in-memory applications and offers on-demand fault tolerance, for both on-disk and in-memory applications. We conducted experiments to identify not only the necessary components of this model but also the trade-offs and factors to be considered. We have initial results showing the efficacy of our implementation in terms of the potential speedup that can be achieved for representative data sets used by cloud applications, and we have quantified the performance gains exhibited by our MapReduce implementation over Apache Hadoop in a compute-intensive environment. 4. Cache Performance Optimization for Processing XML- and HDF-Based Application Data on Multi-Core Processors: It is important to design and develop scientific middleware libraries that harness the opportunities presented by emerging multi-core processors. Implementations of scientific middleware and applications that do not adapt to the programming paradigm when executing on emerging processors can severely impact overall performance. In this project, we focused on the utilization of the L2 cache, which is a critical shared resource on chip multiprocessors (CMPs). The access pattern of the shared L2 cache, which depends on how the application schedules and assigns processing work to each thread, can either enhance or hurt the ability to hide memory latency on a multi-core processor. Therefore, while processing scientific datasets such as HDF5, it is essential to conduct fine-grained analysis of cache utilization to inform scheduling decisions in multi-threaded programming. Using the TAU toolkit for performance feedback from dual- and quad-core machines, we conducted performance analysis and made recommendations on how processing threads can be scheduled on multi-core nodes to enhance the performance of a class of scientific applications that requires processing of HDF5 data. In particular, we quantified the gains associated with the use of the adaptations we have made to the Cache-Affinity and Balanced-Set scheduling algorithms to improve L2 cache performance, and hence overall application execution time [2]. References: 1. Zacharia Fadika, Madhusudhan Govindaraju, "MapReduce Implementation for Memory-Based and Processing Intensive Applications", accepted in 2nd IEEE International Conference on Cloud Computing Technology and Science, Indianapolis, USA, Nov 30 - Dec 3, 2010. 2. Rajdeep Bhowmik, Madhusudhan Govindaraju, "Cache Performance Optimization for Processing XML-based Application Data on Multi-core Processors", in proceedings of The 10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 17-20, 2010, Melbourne, Victoria, Australia. Contact Information: Madhusudhan Govindaraju, Binghamton University, State University of New York (SUNY), mgovinda@cs.binghamton.edu, Phone: 607-777-4904
NASA Astrophysics Data System (ADS)
Yarnykh, V.; Korostyshevskaya, A.
2017-08-01
Macromolecular proton fraction (MPF) is a biophysical parameter describing the amount of macromolecular protons involved in magnetization exchange with water protons in tissues. MPF is of significant interest as a magnetic resonance imaging (MRI) biomarker of myelin for clinical applications. A recent fast MPF mapping method enabled clinical translation of MPF measurements due to time-efficient acquisition based on the single-point constrained fit algorithm. However, previous MPF mapping applications utilized only 3 Tesla MRI scanners and modified pulse sequences, which are not commonly available. This study aimed to test the feasibility of MPF mapping implementation on a 1.5 Tesla clinical scanner using standard manufacturer's sequences and to compare the performance of this method between 1.5 and 3 Tesla scanners. MPF mapping was implemented on 1.5 and 3 Tesla MRI units of one manufacturer with either optimized custom-written or standard product pulse sequences. Whole-brain three-dimensional MPF maps obtained from a single volunteer were compared between field strengths and implementation options. MPF maps demonstrated similar quality at both field strengths. MPF values in segmented brain tissues and specific anatomic regions appeared in close agreement. This experiment demonstrates the feasibility of fast MPF mapping using standard sequences on 1.5 T and 3 T clinical scanners.
Enabling Next-Generation Multicore Platforms in Embedded Applications
2014-04-01
… mapping to sets 129-256) to the second page in memory, color 2 (sets 257-384) to the third page, and so on. Then, after the 32nd page, all 2^12 sets … the Real-Time Nested Locking Protocol (RNLP) [56], a recently developed multiprocessor real-time locking protocol that optimally supports the … In general, the problems of optimally assigning tasks to processors and colors to tasks are both NP-hard.
Anatomy of a hash-based long read sequence mapping algorithm for next generation DNA sequencing.
Misra, Sanchit; Agrawal, Ankit; Liao, Wei-keng; Choudhary, Alok
2011-01-15
Recently, a number of programs have been proposed for mapping short reads to a reference genome. Many of them are heavily optimized for short-read mapping and hence are very efficient for shorter queries, but that makes them inefficient or not applicable for reads longer than 200 bp. However, many sequencers are already generating longer reads and more are expected to follow. For long read sequence mapping, there are limited options; BLAT, SSAHA2, FANGS and BWA-SW are among the popular ones. However, resequencing and personalized medicine need much faster software to map these long sequencing reads to a reference genome to identify SNPs or rare transcripts. We present AGILE (AliGnIng Long rEads), a hash table based high-throughput sequence mapping algorithm for longer 454 reads that uses diagonal multiple seed-match criteria, customized q-gram filtering and a dynamic incremental search approach among other heuristics to optimize every step of the mapping process. In our experiments, we observe that AGILE is more accurate than BLAT, and comparable to BWA-SW and SSAHA2. For practical error rates (< 5%) and read lengths (200-1000 bp), AGILE is significantly faster than BLAT, SSAHA2 and BWA-SW. Even for the other cases, AGILE is comparable to BWA-SW and several times faster than BLAT and SSAHA2. http://www.ece.northwestern.edu/~smi539/agile.html.
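The q-gram filtering step mentioned above can be illustrated with a toy sketch: a reference window that shares too few q-grams with the read cannot align within a given error budget and is discarded before the expensive alignment step. The parameter values and the simple counting filter are assumptions for illustration; AGILE's actual filter is more elaborate.

from collections import Counter

def qgrams(seq, q):
    return Counter(seq[i:i + q] for i in range(len(seq) - q + 1))

def passes_qgram_filter(read, window, q=11, max_errors=10):
    shared = sum((qgrams(read, q) & qgrams(window, q)).values())
    # q-gram lemma: an alignment with at most e errors shares at least
    # len(read) - q + 1 - q*e q-grams with the reference window
    threshold = len(read) - q + 1 - q * max_errors
    return shared >= threshold

read = "ACGT" * 60                       # 240 bp toy read
window = read[:100] + "T" + read[101:]   # near-identical window, one substitution
print(passes_qgram_filter(read, window))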
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunz, Josiah; Snopok, Pavel; Berz, Martin
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map--Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels, which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
Optimization of Feasibility Stage for Hydrogen/Deuterium Exchange Mass Spectrometry
NASA Astrophysics Data System (ADS)
Hamuro, Yoshitomo; Coales, Stephen J.
2018-03-01
The practice of HDX-MS remains somewhat difficult, not only for newcomers but also for veterans, despite its increasing popularity. While a typical HDX-MS project starts with a feasibility stage, in which the experimental conditions are optimized and the peptide map is generated prior to the HDX study stage, the literature usually reports only the HDX study stage. In this protocol, we describe a few considerations for the initial feasibility stage, more specifically, how to optimize quench conditions, how to tackle the carryover issue, and how to apply the pepsin specificity rule. Two sets of quench conditions are described, depending on the presence of disulfide bonds, to facilitate the quench condition optimization process. Four protocols are outlined to minimize carryover during the feasibility stage: (1) addition of a detergent to the quench buffer, (2) injection of a detergent or chaotrope into the protease column after each sample injection, (3) back-flushing of the trap column and the analytical column with a new plumbing configuration, and (4) use of PEEK (or PEEK-coated) frits instead of stainless steel frits for the columns. The application of the pepsin specificity rule after peptide map generation, and not before, is suggested. The rule can be used not only to remove falsely identified peptides, but also to check the sample purity. A well-optimized HDX-MS feasibility stage makes the subsequent HDX study stage smoother and the resulting HDX data more reliable.
NASA Astrophysics Data System (ADS)
Matgen, Patrick; Giustarini, Laura; Hostache, Renaud
2012-10-01
This paper introduces an automatic flood mapping application that is hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to operationally deliver maps of flooded areas using both recent and historical acquisitions of SAR data. Having as a short-term target the flooding-related exploitation of data generated by the upcoming ESA SENTINEL-1 SAR mission, the flood mapping application consists of two building blocks: i) a set of query tools for selecting the "crisis image" and the optimal corresponding "reference image" from the G-POD archive and ii) an algorithm for extracting flooded areas via change detection using the previously selected "crisis image" and "reference image". Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate reference image. Potential users will also be able to apply the implemented flood delineation algorithm. The latter combines histogram thresholding, region growing and change detection as an approach enabling the automatic, objective and reliable extraction of flood extent from SAR images. Both algorithms are computationally efficient and operate with minimum data requirements. The case study of the high-magnitude flooding event that occurred in July 2007 on the Severn River, UK, and that was observed with a moderate-resolution SAR sensor as well as airborne photography, highlights the performance of the proposed online application. The flood mapping application on G-POD can be used sporadically, i.e. whenever a major flood event occurs and there is a demand for SAR-based flood extent maps. In the long term, a potential extension of the application could consist of systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis.
The application of multiple reaction monitoring and multi-analyte profiling to HDL proteins
2014-01-01
Background HDL carries a rich protein cargo and examining HDL protein composition promises to improve our understanding of its functions. Conventional mass spectrometry methods can be lengthy and difficult to extend to large populations. In addition, without prior enrichment of the sample, the ability of these methods to detect low abundance proteins is limited. Our objective was to develop a high-throughput approach to examine HDL protein composition applicable to diabetes and cardiovascular disease (CVD). Methods We optimized two multiplexed assays to examine HDL proteins using a quantitative immunoassay (Multi-Analyte Profiling- MAP) and mass spectrometric-based quantitative proteomics (Multiple Reaction Monitoring-MRM). We screened HDL proteins using human xMAP (90 protein panel) and MRM (56 protein panel). We extended the application of these two methods to HDL isolated from a group of participants with diabetes and prior cardiovascular events and a group of non-diabetic controls. Results We were able to quantitate 69 HDL proteins using MAP and 32 proteins using MRM. For several common proteins, the use of MRM and MAP was highly correlated (p < 0.01). Using MAP, several low abundance proteins implicated in atherosclerosis and inflammation were found on HDL. On the other hand, MRM allowed the examination of several HDL proteins not available by MAP. Conclusions MAP and MRM offer a sensitive and high-throughput approach to examine changes in HDL proteins in diabetes and CVD. This approach can be used to measure the presented HDL proteins in large clinical studies. PMID:24397693
The application and use of chemical space mapping to interpret crystallization screening results
Snell, Edward H.; Nagel, Ray M.; Wojtaszcyk, Ann; O’Neill, Hugh; Wolfley, Jennifer L.; Luft, Joseph R.
2008-01-01
Macromolecular crystallization screening is an empirical process. It often begins by setting up experiments with a number of chemically diverse cocktails designed to sample chemical space known to promote crystallization. Where a potential crystal is seen a refined screen is set up, optimizing around that condition. By using an incomplete factorial sampling of chemical space to formulate the cocktails and presenting the results graphically, it is possible to readily identify trends relevant to crystallization, coarsely sample the phase diagram and help guide the optimization process. In this paper, chemical space mapping is applied to both single macromolecules and to a diverse set of macromolecules in order to illustrate how visual information is more readily understood and assimilated than the same information presented textually. PMID:19018100
Direct aperture optimization: a turnkey solution for step-and-shoot IMRT.
Shepard, D M; Earl, M A; Li, X A; Naqvi, S; Yu, C
2002-06-01
IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
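As a rough illustration of the "direct" optimization idea, the sketch below anneals aperture weights against a toy quadratic dose objective. Real direct aperture optimization simultaneously perturbs MLC leaf positions under machine constraints and uses a full dose engine (EGS4/BEAM in the article); the dose model, objective and cooling schedule here are simplified assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_apertures, n_voxels = 4, 200
dose_per_unit = rng.random((n_voxels, n_apertures))   # stand-in for a real dose matrix
prescription = np.full(n_voxels, 2.0)

def objective(weights):
    dose = dose_per_unit @ weights
    return np.mean((dose - prescription) ** 2)

weights = np.ones(n_apertures)
best, best_cost = weights.copy(), objective(weights)
T = 1.0
for step in range(5000):
    trial = np.clip(weights + rng.normal(0, 0.05, n_apertures), 0, None)
    dc = objective(trial) - objective(weights)
    if dc < 0 or rng.random() < np.exp(-dc / T):       # Metropolis acceptance
        weights = trial
        if objective(weights) < best_cost:
            best, best_cost = weights.copy(), objective(weights)
    T *= 0.999                                          # geometric cooling
print(best_cost)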
NASA Astrophysics Data System (ADS)
Stuffler, Timo; Förster, Klaus; Hofer, Stefan; Leipold, Manfred; Sang, Bernhard; Kaufmann, Hermann; Penné, Boris; Mueller, Andreas; Chlebek, Christian
2009-10-01
In the upcoming generation of satellite sensors, hyperspectral instruments will play a significant role; this payload type is being considered world-wide in various future mission plans. Our team has now successfully finalized the Phase B study for the advanced hyperspectral mission EnMAP (Environmental Mapping and Analysis Programme), Germany's next optical satellite, scheduled for launch in 2012. GFZ in Potsdam has the scientific lead on EnMAP; Kayser-Threde in Munich is the industrial prime. The EnMAP instrument provides over 240 continuous spectral bands in the wavelength range between 420 and 2450 nm with a ground resolution of 30 m×30 m. Thus, the broad science and application community can draw on an extensive and highly resolved pool of information supporting their modeling and optimization work. The performance of the hyperspectral instrument allows detailed monitoring, characterization and parameter extraction of rock/soil targets, vegetation, and inland and coastal waters on a global scale, supporting a wide variety of applications in agriculture, forestry, water management and geology. The operation of an airborne system (ARES) as an element in the HGF hyperspectral network, and the ongoing evolution of data handling and extraction procedures, will support the later inclusion of EnMAP into the growing scientist and user communities.
Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.
Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews
2015-03-01
This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.
Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale
Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...
2017-01-26
Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.
Maximally informative pairwise interactions in networks
Fitzgerald, Jeffrey D.; Sharpee, Tatyana O.
2010-01-01
Several types of biological networks have recently been shown to be accurately described by a maximum entropy model with pairwise interactions, also known as the Ising model. Here we present an approach for finding the optimal mappings between input signals and network states that allow the network to convey the maximal information about input signals drawn from a given distribution. This mapping also produces a set of linear equations for calculating the optimal Ising-model coupling constants, as well as geometric properties that indicate the applicability of the pairwise Ising model. We show that the optimal pairwise interactions are on average zero for Gaussian and uniformly distributed inputs, whereas they are nonzero for inputs approximating those in natural environments. These nonzero network interactions are predicted to increase in strength as the noise in the response functions of each network node increases. This approach also suggests how interactions with unmeasured parts of the network can be inferred from the parameters of response functions for the measured network nodes. PMID:19905153
Minimization for conditional simulation: Relationship to optimal transport
NASA Astrophysics Data System (ADS)
Oliver, Dean S.
2014-05-01
In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
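A minimal sketch of one minimization-based conditional sample (in the randomized-maximum-likelihood spirit described above) is given below for a linear-Gaussian toy problem: the prior sample and the observations are perturbed, and a cost with data-mismatch and model-mismatch terms is minimized. The observation operator, covariances and dimensions are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
G = rng.random((5, 3))                 # linear observation operator
m_true = np.array([1.0, -0.5, 2.0])
d_obs = G @ m_true + 0.1 * rng.normal(size=5)

m_prior = np.zeros(3)
Cm_inv = np.eye(3)                     # prior precision
Cd_inv = np.eye(5) / 0.1 ** 2          # data precision

def one_conditional_sample():
    # perturb prior sample and observations, then minimize the combined misfit
    m_pert = m_prior + rng.multivariate_normal(np.zeros(3), np.linalg.inv(Cm_inv))
    d_pert = d_obs + rng.multivariate_normal(np.zeros(5), np.linalg.inv(Cd_inv))
    def cost(m):
        dd = G @ m - d_pert
        dm = m - m_pert
        return 0.5 * dd @ Cd_inv @ dd + 0.5 * dm @ Cm_inv @ dm
    return minimize(cost, m_pert, method="BFGS").x

samples = np.array([one_conditional_sample() for _ in range(50)])
print(samples.mean(axis=0))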
NASA Astrophysics Data System (ADS)
Lo, Mei-Chun; Hsieh, Tsung-Hsien; Perng, Ruey-Kuen; Chen, Jiong-Qiao
2010-01-01
The aim of this research is to derive illuminant-independent HDR imaging modules that can optimally reconstruct, in a multispectral sense, every color of concern in high-dynamic-range original images for preferable cross-media color reproduction applications. Each module, based on either a broadband or a multispectral approach, incorporates models for perceptual HDR tone mapping and device characterization. In this study, an xvYCC-format HDR digital camera was used to capture HDR scene images for testing. A tone-mapping module was derived based on a multiscale representation of the human visual system, using equations similar to the Michaelis-Menten photoreceptor adaptation equation. Additionally, an adaptive bilateral gamut-mapping algorithm, using a previously derived multiple converging-points approach, was incorporated with or without adaptive Un-sharp Masking (USM) to optimize HDR image rendering. An LCD with the Adobe RGB (D65) standard color space was used as a soft-proofing platform to display HDR original RGB images and to evaluate both the rendition quality and the prediction performance of the derived modules. Another LCD with the sRGB standard color space was used to test the gamut-mapping algorithms integrated with the derived tone-mapping module.
NASA Astrophysics Data System (ADS)
van Loon, E. G. C. P.; Schüler, M.; Katsnelson, M. I.; Wehling, T. O.
2016-10-01
We investigate the Peierls-Feynman-Bogoliubov variational principle to map Hubbard models with nonlocal interactions to effective models with only local interactions. We study the renormalization of the local interaction induced by nearest-neighbor interaction and assess the quality of the effective Hubbard models in reproducing observables of the corresponding extended Hubbard models. We compare the renormalization of the local interactions as obtained from numerically exact determinant quantum Monte Carlo to approximate but more generally applicable calculations using dual boson, dynamical mean field theory, and the random phase approximation. These more approximate approaches are crucial for any application with real materials in mind. Furthermore, we use the dual boson method to calculate observables of the extended Hubbard models directly and benchmark these against determinant quantum Monte Carlo simulations of the effective Hubbard model.
Automated map sharpening by maximization of detail and connectivity
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...
2018-05-18
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
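A rough sketch of how such an "adjusted surface area" style metric could be computed is shown below: the surface area of the iso-contour enclosing a fixed volume fraction, divided by the number of connected regions at that contour. The exact combination and normalization used in the published metric may differ, and the synthetic density maps are placeholders.

import numpy as np
from scipy import ndimage
from skimage import measure

def adjusted_surface_area(density_map, volume_fraction=0.2):
    # threshold so that the chosen fraction of voxels lies above the iso-level
    level = np.quantile(density_map, 1.0 - volume_fraction)
    verts, faces, _, _ = measure.marching_cubes(density_map, level=level)
    area = measure.mesh_surface_area(verts, faces)
    n_regions = ndimage.label(density_map >= level)[1]
    return area / max(n_regions, 1)       # high detail, few blobs -> high score

rng = np.random.default_rng(3)
blurred = ndimage.gaussian_filter(rng.random((48, 48, 48)), sigma=2.0)
sharpened = blurred - 0.5 * ndimage.gaussian_filter(blurred, sigma=4.0)
print(adjusted_surface_area(blurred), adjusted_surface_area(sharpened))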
Classification of wetlands vegetation using small scale color infrared imagery
NASA Technical Reports Server (NTRS)
Williamson, F. S. L.
1975-01-01
A classification system for Chesapeake Bay wetlands was derived from the correlation of film density classes and actual vegetation classes. The data processing programs used were developed by the Laboratory for the Applications of Remote Sensing. These programs were tested for their value in classifying natural vegetation, using digitized data from small scale aerial photography. Existing imagery and the vegetation map of Farm Creek Marsh were used to determine the optimal number of classes, and to aid in determining if the computer maps were a believable product.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Supinski, B.; Caliga, D.
2017-09-28
The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA") based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.
A Comprehensive Multi-Level Model for Campus-Based Leadership Education
ERIC Educational Resources Information Center
Rosch, David; Spencer, Gayle L.; Hoag, Beth L.
2017-01-01
Within this application brief, we propose a comprehensive model for mapping the shape and optimizing the effectiveness of leadership education in campus-wide university settings. The four-level model is highlighted by inclusion of a philosophy statement detailing the values and purpose of leadership education on campus, a set of skills and…
Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT.
Nie, Xiaohua; Wang, Wei; Nie, Haoyao
2017-01-01
Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed compared with Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of "premature convergence," that is, the possibility of trapping in local optimum when dealing with nonlinear optimization problem with a large number of local extreme values. In order to surmount the shortcomings of CSO, Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of the CSO algorithm, because it is easy to fall into the local optimum in the later stage. Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed by introducing tent map for jumping out of local optimum in this paper. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, photovoltaic MPPT model and experimental platform are established and global maximum power point tracking control strategy is achieved by CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment.
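The tent map mentioned above is simple to state: x(n+1) = 2*x(n) if x(n) < 0.5, else 2*(1 - x(n)), producing a chaotic sequence in [0, 1]. The sketch below generates such a sequence and maps a chaotic value onto a search interval; how CQCSO actually re-seeds stagnant cats with these values is an assumption here.

def tent_map_sequence(x0=0.37, n=10):
    # iterate the tent map to obtain a chaotic sequence in [0, 1]
    xs, x = [], x0
    for _ in range(n):
        x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_restart(lower, upper, chaos_value):
    # map a chaotic value in [0, 1] onto the search interval to escape a local optimum
    return lower + chaos_value * (upper - lower)

seq = tent_map_sequence()
print(seq)
print(chaotic_restart(lower=-10.0, upper=10.0, chaos_value=seq[0]))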
NASA Technical Reports Server (NTRS)
Spruce, Joe
2001-01-01
Yellowstone National Park (YNP) contains a diversity of land cover. YNP managers need site-specific land cover maps, which may be produced more effectively using high-resolution hyperspectral imagery. ISODATA clustering techniques have aided operational multispectral image classification and may benefit certain hyperspectral data applications if optimally applied. In response, a study was performed for an area in northeast YNP using 11 select bands of low-altitude AVIRIS data calibrated to ground reflectance. These data were subjected to ISODATA clustering and Maximum Likelihood Classification techniques to produce a moderately detailed land cover map. The latter has good apparent overall agreement with field surveys and aerial photo interpretation.
Application of OpenStreetMap (OSM) to Support the Mapping Village in Indonesia
NASA Astrophysics Data System (ADS)
Swasti Kanthi, Nurin; Hery Purwanto, Taufik
2016-11-01
Geospatial information is important in this era because location information is needed to understand the condition of a region. In 2015 the Indonesian government released detailed mapping at the village level, with standards set forth in the Norms, Standards, Procedures and Criteria for Village Mapping (NSPK). Over time, Web and Mobile GIS have been developed for a wide range of applications, but the combination of detailed village mapping with Web GIS is still rarely performed and not used optimally. OpenStreetMap (OSM) is a Web GIS that can also be used as a Mobile GIS; it provides information down to the level of individual buildings and can be used for village mapping. Village mapping using OSM was conducted with a remote sensing and Geographical Information Systems (GIS) approach, interpreting the remote sensing imagery available in OSM. The study analyzed how far OSM can support village mapping by entering house numbers, administrative boundaries, public facilities and land use into OSM, with reference data and Village Plan imagery. The results of mapping part of the village in OSM were used as a reference for village map making and analyzed in accordance with the NSPK for detailed mapping of the Rukun Warga (RW), which is part of village mapping. The use of OSM greatly assists the detailed mapping of a region, with image data sources that are openly accessible, but the data source still needs maintenance and updating to keep the data valid.
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Michelson, Diane K.
2017-01-01
During conceptual design, speed and accuracy are often at odds. Specifically in the realm of launch vehicles, optimizing the ascent trajectory requires a large pool of analytical power and expertise. Experienced analysts working on familiar vehicles can produce optimal trajectories in a short time frame; however, whenever either "experienced" or "familiar" is not applicable, the optimization process can become quite lengthy. In order to construct a vehicle-agnostic method, an established global optimization algorithm is needed. In this work the authors develop an "artificial" error term that maps arbitrary control vectors to a non-zero error on which a global method can operate. Two global methods are compared alongside Design of Experiments and random sampling and are shown to produce results comparable to analysis done by a human expert.
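A hedged sketch of what such an "artificial" error term might look like is given below: every candidate control vector receives a finite, comparable score even when the trajectory propagation fails or misses its targets, so a global optimizer can rank it. The weighting scheme and the simulate() interface are illustrative assumptions, not the POST formulation.

def artificial_error(controls, simulate, targets, weights, big_penalty=1e6):
    # score a candidate control vector; failed propagations still get a finite rankable value
    try:
        achieved = simulate(controls)          # e.g. altitude, velocity, flight-path angle
    except RuntimeError:
        return big_penalty
    return sum(w * (a - t) ** 2 for w, a, t in zip(weights, achieved, targets))

# toy "trajectory" standing in for the real propagator
simulate = lambda u: (100.0 * u[0] + 5.0 * u[1], 2.0 * u[1])
print(artificial_error([1.0, 0.5], simulate, targets=(120.0, 1.0), weights=(1.0, 10.0)))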
Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Cox, Cary M.
This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research - there are five times as many radar-focused peer reviewed journal articles than hyperspectral-focused peer reviewed journal articles. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to image processing techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. The Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications. This work also explores the concept of an edge within hyperspectral space, the relative importance of spatial and spectral resolutions as they pertain to HSI edge detection and how effectively compressed HSI data improves edge detection results. The HSI edge detection experiments yielded valuable insights into the algorithms' strengths, weaknesses and optimal alignment to remote sensing applications. The gradient-based edge operator produced strong edge planes across a range of evaluation measures and applications, particularly with respect to false negatives, unbroken edges, urban mapping, vegetation mapping and oil spill mapping applications. False positives and uncompressed HSI data presented occasional challenges to the algorithm. The HySPADE edge operator produced satisfactory results with respect to localization, single-point response, oil spill mapping and trace chemical detection, and was challenged by false positives, declining spectral resolution and vegetation mapping applications. The level set edge detector produced high-quality edge planes for most tests and demonstrated strong performance with respect to false positives, single-point response, oil spill mapping and mineral mapping. False negatives were a regular challenge for the level set edge detection algorithm. 
Finally, HSI data optimized for spectral information compression and noise was shown to improve edge detection performance across all three algorithms, while the gradient-based algorithm and HySPADE demonstrated significant robustness to declining spectral and spatial resolutions.
NASA Astrophysics Data System (ADS)
Szabó, S.; Bódis, K.; Huld, T.; Moner-Girona, M.
2011-07-01
Three rural electrification options are analysed showing the cost optimal conditions for a sustainable energy development applying renewable energy sources in Africa. A spatial electricity cost model has been designed to point out whether diesel generators, photovoltaic systems or extension of the grid are the least-cost option in off-grid areas. The resulting mapping application offers support to decide in which regions the communities could be electrified either within the grid or in an isolated mini-grid. Donor programs and National Rural Electrification Agencies (or equivalent governmental departments) could use this type of delineation for their program boundaries and then could use the local optimization tools adapted to the prevailing parameters. The views expressed in this paper are those of the authors and do not necessarily represent European Commission and UNEP policy.
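The kind of per-settlement least-cost comparison the mapping application performs can be illustrated with a simple sketch; the cost figures and the simplified levelized-cost formulas below are placeholder assumptions, not the parameters of the published model.

def diesel_cost(kwh_per_year, fuel_cost_per_kwh=0.45, om=0.05):
    # annual cost dominated by fuel plus operation and maintenance
    return kwh_per_year * (fuel_cost_per_kwh + om)

def pv_cost(kwh_per_year, capex_per_kwp=2500.0, kwh_per_kwp_year=1800.0,
            lifetime_years=20, om_fraction=0.02):
    kwp = kwh_per_year / kwh_per_kwp_year
    capex = kwp * capex_per_kwp
    return capex / lifetime_years + om_fraction * capex

def grid_cost(kwh_per_year, distance_km, line_cost_per_km=15000.0,
              lifetime_years=30, energy_cost_per_kwh=0.10):
    # line-extension capex annualized plus delivered energy cost
    return distance_km * line_cost_per_km / lifetime_years + kwh_per_year * energy_cost_per_kwh

demand = 50000.0   # kWh per year for a small settlement
for d in (2, 10, 40):
    options = {"diesel": diesel_cost(demand), "pv": pv_cost(demand), "grid": grid_cost(demand, d)}
    print(d, "km ->", min(options, key=options.get), options)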
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified, and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, second order correction technique is introduced to overcome Maratos effect. The combination application of the reduced SQP method and condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology is demonstrated by two numerical examples.
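The null-space idea can be shown on a toy problem: with linear(ized) equality constraints A x = b, write x = x_p + Z y, where Z spans the null space of A, and optimize over the reduced variable y so that the equality constraints are satisfied by construction. The quadratic objective below is a stand-in, not the harmonic-balance residual.

import numpy as np
from scipy.linalg import null_space, lstsq
from scipy.optimize import minimize

A = np.array([[1.0, 1.0, 1.0]])            # one equality constraint: x1 + x2 + x3 = 1
b = np.array([1.0])
x_p = lstsq(A, b)[0]                       # particular solution
Z = null_space(A)                          # basis of the null space (3 x 2)

def f(x):                                  # toy objective in the full space
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + x[2] ** 2

reduced = lambda y: f(x_p + Z @ y)         # objective expressed in the reduced space
res = minimize(reduced, np.zeros(Z.shape[1]), method="SLSQP")
x_opt = x_p + Z @ res.x
print(x_opt, A @ x_opt)                    # constraint satisfied by construction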
Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data
2014-01-01
Background: The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results: In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data, for which such a comparison has not yet been established. Conclusions: A benchmark procedure to compare HTS data mappers is introduced with a new definition for mapping correctness, as well as tools to generate simulated reads and evaluate mapping quality. The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
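The spirit of that robustness definition can be captured in a small predicate that compares a mapper's reported alignment against the simulated truth on start position, end position and edit operations. The field names and tolerance values below are hypothetical, chosen only to illustrate the idea; the actual criteria implemented by CuReSimEval may differ.

```python
def is_correctly_mapped(truth, aln, pos_tol=5, indel_tol=1, subst_tol=2):
    """Hypothetical correctness check in the spirit of the benchmark's definition.

    truth : dict with the simulated ground truth for one read, e.g.
            {'start': ..., 'end': ..., 'indels': ..., 'substitutions': ...}
    aln   : dict with the mapper's reported values for the same fields.
    """
    return (abs(aln["start"] - truth["start"]) <= pos_tol and
            abs(aln["end"] - truth["end"]) <= pos_tol and
            abs(aln["indels"] - truth["indels"]) <= indel_tol and
            abs(aln["substitutions"] - truth["substitutions"]) <= subst_tol)

# Example: a read mapped 2 bp away from its true locus with one extra indel
truth = {"start": 1000, "end": 1150, "indels": 1, "substitutions": 3}
aln = {"start": 1002, "end": 1151, "indels": 2, "substitutions": 3}
print(is_correctly_mapped(truth, aln))   # True under these tolerances
```

Counting the fraction of simulated reads that pass such a check, per mapper, gives the robustness figure that the benchmark compares across tools.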
Demeestere, K; Smet, E; Van Langenhove, H; Galbacs, Z
2001-12-01
Among the physico-chemical abatement technologies, mainly acid scrubbers have been used to control NH3 emission. The disadvantage of this technique is that it yields waste water highly concentrated in ammonia. In this report, the applicability of the magnesium ammonium phosphate (MAP) process to regenerate the liquid phase produced by scrubbing NH3-loaded waste gases was investigated. In the MAP process, ammonium is precipitated as magnesium ammonium phosphate, which can be used as a slow-release fertilizer. The influence of a number of parameters, e.g. pH, kinetics and the NH4+/Mg2+/PO4(3-) molar ratio, on the efficiency of MAP formation and on the ammonium removal efficiency was investigated. In this way, optimal conditions were determined for the precipitation reaction. In addition, interference caused by other precipitation reactions was studied. At aqueous NH4+ concentrations of about 600 mg l(-1), ammonium removal efficiencies of 97% could be obtained at an NH4+/Mg2+/PO4(3-) molar ratio of 1/1.5/1.5. To obtain this result, the pH was continuously adjusted to a value of 9 during the reaction. According to this study, it is clear that the MAP precipitation technology offers opportunities for ammonium removal from scrubbing liquids. The practical applicability of the MAP process in waste gas treatment systems, however, should be the subject of further investigations.
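For reference, the precipitation underlying the MAP process is the formation of struvite (magnesium ammonium phosphate hexahydrate); a common way to write the overall reaction is shown below. The study's 1/1.5/1.5 molar ratio reflects dosing Mg2+ and phosphate in excess of this 1:1:1 stoichiometry to drive ammonium removal.

```latex
\mathrm{Mg^{2+}} + \mathrm{NH_4^{+}} + \mathrm{PO_4^{3-}} + 6\,\mathrm{H_2O}
  \;\longrightarrow\; \mathrm{MgNH_4PO_4\cdot 6\,H_2O}\downarrow
```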
Texture mapping via optimal mass transport.
Dominitz, Ayelet; Tannenbaum, Allen
2010-01-01
In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the "earth-mover's metric"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.
NASA Technical Reports Server (NTRS)
Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew
2017-01-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
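A fully constrained least-squares unmixing step can be written compactly as a small constrained optimization per pixel: abundances must be non-negative and sum to one. The SciPy sketch below illustrates that generic FCLS formulation; it is not the authors' subpixel learning algorithm, and the endmember matrix, solver choice and variable names are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fcls_unmix(pixel, endmembers):
    """Fully constrained least-squares abundance estimate for one pixel.

    pixel      : (bands,) observed spectrum.
    endmembers : (bands, n_end) matrix, e.g. substrate / vegetation / dark.
    Returns abundances that are non-negative and sum to one.
    """
    n = endmembers.shape[1]
    obj = lambda a: np.sum((endmembers @ a - pixel) ** 2)
    cons = ({"type": "eq", "fun": lambda a: np.sum(a) - 1.0},)
    bnds = [(0.0, 1.0)] * n
    a0 = np.full(n, 1.0 / n)
    res = minimize(obj, a0, bounds=bnds, constraints=cons, method="SLSQP")
    return res.x

# Example with a hypothetical 6-band sensor and 3 endmembers (S, V, D)
E = np.random.rand(6, 3)
true_a = np.array([0.6, 0.3, 0.1])
print(fcls_unmix(E @ true_a, E))   # should recover approximately [0.6, 0.3, 0.1]
```

Applying this pixel by pixel over a Landsat mosaic is what produces the S-V-D abundance maps that are then classified.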
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2017-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2016-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
Cluster categorization of urban roads to optimize their noise monitoring.
Zambon, G; Benocci, R; Brambilla, G
2016-01-01
Road traffic in urban areas is closely tied to urban mobility and public health, and it is often the main source of noise pollution. Lately, noise maps have been considered a powerful tool to estimate the population exposure to environmental noise, but they need to be validated with measured noise data. The project Dynamic Acoustic Mapping (DYNAMAP), co-funded in the framework of the LIFE 2013 program, aims to develop a statistically based method to optimize the choice and the number of monitoring sites and to automate the noise mapping update using data retrieved from a low-cost monitoring network. Indeed, the first objective should improve on the spatial sampling based on the legislative road classification, as this classification reflects mainly the geometrical characteristics of the road rather than its noise emission. The present paper describes the statistical approach of the methodology under development and the results of its preliminary application to a limited sample of roads in the city of Milan. The resulting categorization of roads, based on clustering the 24 hourly LAeq,h values, looks promising for optimizing the spatial sampling of noise monitoring toward a more efficient description of the noise pollution due to complex urban road networks than that based on the legislative road classification.
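A minimal version of the clustering idea groups roads by the shape of their 24 hourly LAeq,h profiles after removing each road's mean level, so that roads with similar daily traffic-noise patterns fall into the same monitoring category. The sketch below uses k-means on synthetic profiles; the number of clusters, the normalization, and the data are illustrative assumptions, not the DYNAMAP procedure itself.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row: one road's 24 hourly LAeq,h values (synthetic placeholder data).
profiles = rng.normal(65.0, 5.0, size=(200, 24))

# Remove each road's mean level so clustering captures the temporal shape,
# not the absolute noise level.
shapes = profiles - profiles.mean(axis=1, keepdims=True)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(shapes)
print(np.bincount(labels))   # number of roads falling into each category
```

Monitoring stations can then be allocated per cluster rather than per legislative road class, which is the spatial-sampling optimization the project is after.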
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Chan Tseung, H; Ma, J; Ma, D
2015-06-15
Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based biological planning for the treatment of thyroid tumors in spot-scanning proton therapy. Methods: Recently, we developed a fast and accurate GPU-based MC simulation of proton transport that was benchmarked against Geant4.9.6 and used as the dose calculation engine in a clinically-applicable GPU-accelerated IMPT optimizer. Besides dose, it can simultaneously score the dose-averaged LET (LETd), which makes fast biological dose (BD) estimates possible. To convert from LETd to BD, we used a linear relation based on cellular irradiation data. Given a thyroid patient with a 93cc tumor volume, we created a 2-field IMPT plan in Eclipse (Varian Medical Systems). This plan was re-calculated with our MC to obtain the BD distribution. A second 5-field plan was made with our in-house optimizer, using pre-generated MC dose and LETd maps. Constraints were placed to maintain the target dose to within 25% of the prescription, while maximizing the BD. The plan optimization and calculation of dose and LETd maps were performed on a GPU cluster. The conventional IMPT and biologically-optimized plans were compared. Results: The mean target physical and biological doses from our biologically-optimized plan were, respectively, 5% and 14% higher than those from the MC re-calculation of the IMPT plan. Dose sparing to critical structures in our plan was also improved. The biological optimization, including the initial dose and LETd map calculations, can be completed in a clinically viable time (∼30 minutes) on a cluster of 25 GPUs. Conclusion: Taking advantage of GPU acceleration, we created a MC-based, biologically optimized treatment plan for a thyroid patient. Compared to a standard IMPT plan, a 5% increase in the target's physical dose resulted in ∼3 times as much increase in the BD. Biological planning was thus effective in escalating the target BD.
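The abstract does not give the exact linear relation used to convert LETd to biological dose; a common parameterization in the proton-therapy literature, shown here only as an assumed form, scales the physical dose by a relative biological effectiveness that grows linearly with dose-averaged LET:

```latex
\mathrm{BD} \;=\; D\,\bigl(1 + c\,\mathrm{LET_d}\bigr)
```

where D is the physical dose and c is a slope fitted to cellular irradiation data. Under such a form, raising LETd in the target at (nearly) constant physical dose is what allows the biological dose to escalate faster than the physical dose.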
Exploring quantum computing application to satellite data assimilation
NASA Astrophysics Data System (ADS)
Cheung, S.; Zhang, S. Q.
2015-12-01
This is an exploratory study of the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will then be tested by solving QUBO instances defined on the Chimera graphs of the quantum computer.
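Once the assimilation cost function is expressed over binary (wavelet-coefficient) variables, mapping a quadratic program to QUBO is mechanical: because x_i squared equals x_i for binary variables, the linear term folds into the diagonal of the quadratic matrix. The NumPy sketch below shows only that folding step on a toy problem; the actual construction of the cost function from the binary wavelet transform, and its embedding onto Chimera hardware, are not reproduced.

```python
import numpy as np

def qp_to_qubo(Q, c):
    """Fold the linear term of a binary quadratic program into a QUBO matrix.

    For binary x (each x_i in {0, 1}) we have x_i**2 == x_i, so
        x^T Q x + c^T x  ==  x^T (Q + diag(c)) x.
    """
    return Q + np.diag(c)

# Toy objective: x0 + x1 - 2*x0*x1  (minimized at x = [0, 0] or x = [1, 1]).
Q = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
c = np.array([1.0, 1.0])
qubo = qp_to_qubo(Q, c)

# Brute-force check over all binary assignments (what an annealer samples).
for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(bits, dtype=float)
    print(bits, x @ qubo @ x)
```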
Landsat Time-Series Analysis Opens New Approaches for Regional Glacier Mapping
NASA Astrophysics Data System (ADS)
Winsvold, S. H.; Kääb, A.; Nuth, C.; Altena, B.
2016-12-01
The archive of Landsat satellite scenes is important for mapping of glaciers, especially as it represents the longest running and continuous satellite record of sufficient resolution to track glacier changes over time. Contributing optical sensors newly launched (Landsat 8 and Sentinel-2A) or upcoming in the near future (Sentinel-2B) will provide very high temporal resolution of optical satellite images, especially in high-latitude regions. Because of the potential that lies within such near-future dense time series, methods for mapping glaciers from space should be revisited. We present application scenarios that utilize and explore dense time series of optical data for automatic mapping of glacier outlines and glacier facies. Throughout the season, glaciers display a temporal sequence of properties in optical reflection as the seasonal snow melts away, and glacier ice appears in the ablation area and firn in the accumulation area. In one application scenario presented, we simulated potential future seasonal resolution using several years of Landsat 5 TM/7 ETM+ data, and found a sinusoidal evolution of the spectral reflectance for on-glacier pixels throughout a year. We believe this is because of the shortwave infrared band and its sensitivity to snow grain size. The parameters retrieved from the fitted sinusoidal curve can be used for glacier mapping purposes; we also found similar results using, e.g., the mean of summer band-ratio images. In individual optical mapping scenes, conditions will vary (e.g., snow, ice, and clouds) and will not be equally optimal over the entire scene. Using robust statistics on stacked pixels reveals a potential for synthesizing optimal mapping scenes from a temporal stack, as we present in a further application scenario. The dense time series available from satellite imagery will also promote multi-temporal and multi-sensor based analyses. The seasonal pattern of snow and ice on a glacier seen in the optical time series can, in the summer season, also be observed using radar backscatter series. Optical sensors reveal the reflective properties at the surface, while radar sensors may penetrate the surface, revealing properties from a certain volume. As an outlook to this contribution, we have explored how information from SAR and optical sensor systems can be combined for different purposes.
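The per-pixel seasonal signal described above can be summarized by fitting a one-cycle-per-year sinusoid to the time series of reflectance or band-ratio values and keeping its mean, amplitude and phase as mapping features. The SciPy sketch below illustrates that idea on synthetic data; the model form, parameter names and noise level are assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def seasonal_model(doy, mean, amp, phase):
    """One-cycle-per-year sinusoid in day-of-year (illustrative model form)."""
    return mean + amp * np.sin(2.0 * np.pi * doy / 365.25 + phase)

# Synthetic multi-year stack of band-ratio values for a single on-glacier pixel
rng = np.random.default_rng(0)
doy = rng.uniform(1, 365, size=60)                 # acquisition days of year
obs = seasonal_model(doy, 0.4, 0.25, 1.0) + rng.normal(0, 0.03, doy.size)

params, _ = curve_fit(seasonal_model, doy, obs, p0=[0.5, 0.2, 0.0])
mean_level, amplitude, phase = params              # candidate per-pixel features
```

Pixels whose fitted amplitude and phase follow the snow-to-ice seasonal pattern are candidates for the glacier class, which is the sense in which the curve parameters "can be used for glacier mapping purposes".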
Valdés, Julio J; Barton, Alan J
2007-05-01
A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
Analysis on the application of background parameters on remote sensing classification
NASA Astrophysics Data System (ADS)
Qiao, Y.
Mapping crop cultivation acreage accurately, monitoring crop growth dynamically and forecasting yield are important applications of remote sensing to agriculture. During the 8th Five-Year Plan period, yield estimation using remote sensing technology for the main crops in China's major production regions was a subtopic of the national research task titled "Study on Application of Remote Sensing Technology". In the 21st century, within a programme launched by the Chinese Ministry of Agriculture to bring high technology into farming production, remote sensing has been applied extensively to crop growth monitoring and yield forecasting, and in 2001 the Ministry entrusted the Northern China Center of Agricultural Remote Sensing with forecasting the yield of main crops such as wheat, maize and rice within a short time, to supply information to government decision makers. The present paper reports on this task. It describes the application of background parameters in image recognition, classification and mapping, focusing on the underlying geoscientific theory, ecological features and cartographic objects or scale, the study of phenology to select the optimal image acquisition time for classifying ground objects, the analysis of optimal waveband composition, and the application of a background database to spatial information recognition. Research based on knowledge of background parameters is indispensable for improving the accuracy of image classification and mapping quality; this work won a second-class science and technology achievement award from the Chinese Ministry of Agriculture. Keywords: Spatial image; Classification; Background parameter
Accurate nonlinear mapping between MNI volumetric and FreeSurfer surface coordinate systems.
Wu, Jianxiao; Ngo, Gia H; Greve, Douglas; Li, Jingwei; He, Tong; Fischl, Bruce; Eickhoff, Simon B; Yeo, B T Thomas
2018-05-16
The results of most neuroimaging studies are reported in volumetric (e.g., MNI152) or surface (e.g., fsaverage) coordinate systems. Accurate mappings between volumetric and surface coordinate systems can facilitate many applications, such as projecting fMRI group analyses from MNI152/Colin27 to fsaverage for visualization or projecting resting-state fMRI parcellations from fsaverage to MNI152/Colin27 for volumetric analysis of new data. However, there has been surprisingly little research on this topic. Here, we evaluated three approaches for mapping data between MNI152/Colin27 and fsaverage coordinate systems by simulating the above applications: projection of group-average data from MNI152/Colin27 to fsaverage and projection of fsaverage parcellations to MNI152/Colin27. Two of the approaches are currently widely used. A third approach (registration fusion) was previously proposed, but not widely adopted. Two implementations of the registration fusion (RF) approach were considered, with one implementation utilizing the Advanced Normalization Tools (ANTs). We found that RF-ANTs performed the best for mapping between fsaverage and MNI152/Colin27, even for new subjects registered to MNI152/Colin27 using a different software tool (FSL FNIRT). This suggests that RF-ANTs would be useful even for researchers not using ANTs. Finally, it is worth emphasizing that the most optimal approach for mapping data to a coordinate system (e.g., fsaverage) is to register individual subjects directly to the coordinate system, rather than via another coordinate system. Only in scenarios where the optimal approach is not possible (e.g., mapping previously published results from MNI152 to fsaverage), should the approaches evaluated in this manuscript be considered. In these scenarios, we recommend RF-ANTs (https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/registration/Wu2017_RegistrationFusion). © 2018 Wiley Periodicals, Inc.
A case study of tuning MapReduce for efficient Bioinformatics in the cloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Lizhen; Wang, Zhong; Yu, Weikuan
The combination of the Hadoop MapReduce programming model and cloud computing allows biological scientists to analyze next-generation sequencing (NGS) data in a timely and cost-effective manner. Cloud computing platforms remove the burden of IT facility procurement and management from end users and provide ease of access to Hadoop clusters. However, biological scientists are still expected to choose appropriate Hadoop parameters for running their jobs. More importantly, the available Hadoop tuning guidelines are either obsolete or too general to capture the particular characteristics of bioinformatics applications. In this paper, we aim to minimize the cloud computing cost spent on bioinformatics data analysis by optimizing the extracted significant Hadoop parameters. When using MapReduce-based bioinformatics tools in the cloud, the default settings often lead to resource underutilization and wasteful expenses. We choose k-mer counting, a representative application used in a large number of NGS data analysis tools, as our study case. Experimental results show that, with the fine-tuned parameters, we achieve a total of 4× speedup compared with the original performance (using the default settings). Finally, this paper presents an exemplary case for tuning MapReduce-based bioinformatics applications in the cloud, and documents the key parameters that could lead to significant performance benefits.
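As a flavor of the study case, a k-mer counting job in the MapReduce model reduces to a mapper that emits every length-k substring with a count of one, plus a reducer (or combiner) that sums counts per k-mer; the cluster-side Hadoop parameters the paper tunes then control how those intermediate pairs are buffered, sorted and shuffled. The Hadoop-Streaming-style mapper below is a simplified sketch: k = 21 is arbitrary, real tools parse FASTQ records properly, and no specific tuning values are implied.

```python
#!/usr/bin/env python3
"""Hypothetical Hadoop-Streaming-style k-mer counting mapper (k = 21).

Reads raw sequence lines from stdin and emits "<kmer>\t1" pairs; a standard
word-count reducer (or a combiner) then sums the counts per k-mer.
"""
import sys

K = 21

def mapper(stream):
    for line in stream:
        seq = line.strip().upper()
        # Skip headers, quality strings, and reads containing ambiguous bases
        if not seq or any(ch not in "ACGT" for ch in seq):
            continue
        for i in range(len(seq) - K + 1):
            print(f"{seq[i:i + K]}\t1")

if __name__ == "__main__":
    mapper(sys.stdin)
```

Because the mapper output is enormous relative to the input, this workload stresses exactly the buffering, compression and reducer-count settings whose defaults the paper found wasteful.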
Floating-Point Modules Targeted for Use with RC Compilation Tools
NASA Technical Reports Server (NTRS)
Sahin, Ibrahin; Gloster, Clay S.
2000-01-01
Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems, and in most cases they provided the smallest published execution time. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time than general-purpose processors. This increased development time of RC systems provides the motivation to develop an optimized module library, with an assembly language instruction format interface, for use with future RC systems, which will reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experiences show that, with these modules, an 8-10X speedup over general-purpose processors can be achieved.
Overview and example application of the Landscape Treatment Designer
Alan A. Ager; Nicole M. Vaillant; David E. Owens; Stuart Brittain; Jeff Hamann
2012-01-01
The Landscape Treatment Designer (LTD) is a multicriteria spatial prioritization and optimization system to help design and explore landscape fuel treatment scenarios. The program fills a gap between fire model programs such as FlamMap, and planning systems such as ArcFuels, in the fuel treatment planning process. The LTD uses inputs on spatial treatment objectives,...
NASA Technical Reports Server (NTRS)
Geering, H. P.; Athans, M.
1973-01-01
A complete theory of necessary and sufficient conditions is discussed for a control to be superior with respect to a nonscalar-valued performance criterion. The latter maps into a finite-dimensional, integrally closed, directed, partially ordered linear space. The applicability of the theory to the analysis of dynamic vector estimation problems and to a class of uncertain optimal control problems is demonstrated.
Digi Island: A Serious Game for Teaching and Learning Digital Circuit Optimization
NASA Technical Reports Server (NTRS)
Harper, Michael; Miller, Joseph; Shen, Yuzhong
2011-01-01
Karnaugh maps, also known as K-maps, are a tool used to optimize or simplify digital logic circuits. A K-map is a graphical display of a logic circuit. K-map optimization is essentially the process of finding a minimum number of maximal aggregations of K-map cells with values of 1, according to a set of rules. Digi Island is a serious game designed to aid students in learning K-map optimization. The game takes place on an exotic island (called Digi Island) in the Pacific Ocean. The player is an adventurer who comes to Digi Island and transforms it into a tourist attraction by developing real estate, such as amusement parks and hotels. The Digi Island game elegantly converts the 1s and 0s of digital circuits into usable and unusable spaces on a beautiful island, and transforms K-map optimization into real estate development, an activity with which many students are familiar and in which they are interested. This paper discusses the design, development, and some preliminary results of the Digi Island game.
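The same minimization that a student performs by grouping adjacent 1-cells on a K-map can be checked programmatically with the Quine-McCluskey procedure, which SymPy exposes as SOPform. The snippet below is an independent illustration of that equivalence for a 4-variable function; the chosen minterms are arbitrary and unrelated to the game's levels.

```python
from sympy import symbols
from sympy.logic import SOPform

a, b, c, d = symbols("a b c d")

# Truth-table rows (bit patterns for a, b, c, d) where the function equals 1,
# i.e. the cells that would be marked with 1 on a 4-variable K-map.
minterms = [[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0],
            [0, 1, 0, 1], [0, 1, 1, 1],
            [1, 0, 0, 0], [1, 0, 0, 1], [1, 0, 1, 0],
            [1, 1, 0, 1], [1, 1, 1, 1]]

print(SOPform([a, b, c, d], minterms))   # minimal sum-of-products expression
```

Each product term in the printed expression corresponds to one maximal rectangular grouping of 1-cells on the map.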
Combined Exact-Repeat and Geodetic Mission Altimetry for High-Resolution Empirical Tide Mapping
NASA Astrophysics Data System (ADS)
Zaron, E. D.
2014-12-01
The configuration of present and historical exact-repeat mission (ERM) altimeter ground tracks determines the maximum resolution of empirical tidal maps obtained with ERM data. Although the mode-1 baroclinic tide is resolvable at mid-latitudes in the open ocean, the ability to detect baroclinic and barotropic tides near islands and complex coastlines is limited, in part, by ERM track density. In order to obtain higher resolution maps, the possibility of combining ERM and geodetic mission (GM) altimetry is considered, using a combination of spatial thin-plate splines and temporal harmonic analysis. Given the present spatial and temporal distribution of GM missions, it is found that GM data can contribute to resolving tidal features smaller than 75 km, provided the signal amplitude is greater than about 1 cm. Uncertainties in the mean sea surface and environmental corrections are significant components of the GM error budget, and methods for data selection and along-track filtering are still being optimized. Application to two regions, Monterey Bay and Luzon Strait, finds evidence for complex tidal fields in agreement with independent observations and modeling studies.
NASA Technical Reports Server (NTRS)
Bodechtel, J.; Nithack, J.; Dibernardo, G.; Hiller, K.; Jaskolla, F.; Smolka, A.
1975-01-01
Utilizing LANDSAT and Skylab multispectral imagery of 1972 and 1973, a land use map of the mountainous regions of Italy was evaluated at a scale of 1:250,000. Seven level I categories were identified by conventional methods of photointerpretation. Images of multispectral scanner (MSS) bands 5 and 7, or equivalents, were mainly used. Areas of less than 200 by 200 m were classified, and standard procedures were established for interpretation of multispectral satellite imagery. Land use maps were produced for central and southern Europe, indicating that the existing land use maps could be updated and optimized. The complexity of European land use patterns, the intensive morphology of young mountain ranges, and time-cost calculations are the reasons that the applied conventional techniques are superior to automatic evaluation.
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that the NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.
Adaptive treatment-length optimization in spatiobiologically integrated radiotherapy
NASA Astrophysics Data System (ADS)
Ajdari, Ali; Ghate, Archis; Kim, Minsun
2018-04-01
Recent theoretical research on spatiobiologically integrated radiotherapy has focused on optimization models that adapt fluence-maps to the evolution of tumor state, for example, cell densities, as observed in quantitative functional images acquired over the treatment course. We propose an optimization model that adapts the length of the treatment course as well as the fluence-maps to such imaged tumor state. Specifically, after observing the tumor cell densities at the beginning of a session, the treatment planner solves a group of convex optimization problems to determine an optimal number of remaining treatment sessions, and a corresponding optimal fluence-map for each of these sessions. The objective is to minimize the total number of tumor cells remaining (TNTCR) at the end of this proposed treatment course, subject to upper limits on the biologically effective dose delivered to the organs-at-risk. This fluence-map is administered in future sessions until the next image is available, and then the number of sessions and the fluence-map are re-optimized based on the latest cell density information. We demonstrate via computer simulations on five head-and-neck test cases that such adaptive treatment-length and fluence-map planning reduces the TNTCR and increases the biological effect on the tumor while employing shorter treatment courses, as compared to only adapting fluence-maps and using a pre-determined treatment course length based on one-size-fits-all guidelines.
NASA Astrophysics Data System (ADS)
Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang
2016-05-01
In this study, a novel batch sliding window (BSW) based singularity mapping approach is proposed. Compared to the traditional sliding window (SW) technique, which requires the empirical predetermination of a fixed maximum window size and relies on least-squares (LS) linear regression that is sensitive to outliers, the BSW based singularity mapping approach can automatically determine the optimal size of the largest window for each estimated position and utilizes robust linear regression (RLR), which is insensitive to outlier values. In the case study, tin geochemical data from Gejiu, Yunnan, were processed by the BSW based singularity mapping approach. The results show that the BSW approach can improve the accuracy of the calculated singularity exponent values owing to the determination of the optimal maximum window size. The use of the RLR method in the BSW approach smooths the distribution of singularity index values, leaving few or even no highly fluctuating values resembling noise points, which usually make a singularity map look rough and discontinuous. Furthermore, the Student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomalies and known tin polymetallic deposits. The target areas within high tin geochemical anomalies probably have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly those that show strong tin geochemical anomalies but in which no tin polymetallic deposits have yet been found.
Global Coverage Measurement Planning Strategies for Mobile Robots Equipped with a Remote Gas Sensor
Arain, Muhammad Asif; Trincavelli, Marco; Cirillo, Marcello; Schaffernicht, Erik; Lilienthal, Achim J.
2015-01-01
The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions. PMID:25803707
Global coverage measurement planning strategies for mobile robots equipped with a remote gas sensor.
Arain, Muhammad Asif; Trincavelli, Marco; Cirillo, Marcello; Schaffernicht, Erik; Lilienthal, Achim J
2015-03-20
The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. In this paper, we address the problem of gas detection in large areas with a mobile robotic platform equipped with a remote gas sensor. We propose an algorithm that leverages a novel method based on convex relaxation for quickly solving sensor placement problems, and for generating an efficient exploration plan for the robot. To demonstrate the applicability of our method to real-world environments, we performed a large number of experimental trials, both on randomly generated maps and on the map of a real environment. Our approach proves to be highly efficient in terms of computational requirements and to provide nearly-optimal solutions.
Remote sensing of Qatar nearshore habitats with perspectives for coastal management.
Warren, Christopher; Dupont, Jennifer; Abdel-Moati, Mohamed; Hobeichi, Sanaa; Palandro, David; Purkis, Sam
2016-04-30
A framework is proposed for utilizing remote sensing and ground-truthing field data to map benthic habitats in the State of Qatar, with potential application across the Arabian Gulf. Ideally the methodology can be applied to optimize the efficiency and effectiveness of mapping the nearshore environment to identify sensitive habitats, monitor for change, and assist in management decisions. The framework is applied to a case study for northeastern Qatar with a key focus on identifying high sensitivity coral habitat. The study helps confirm the presence of known coral and provides detail on a region in the area of interest where corals have not been previously mapped. Challenges for the remote sensing methodology associated with natural heterogeneity of the physical and biological environment are addressed. Recommendations on the application of this approach to coastal environmental risk assessment and management planning are discussed as well as future opportunities for improvement of the framework. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Besson, Pierre; Dominguez, Cesar; Voarino, Philippe; Garcia-Linares, Pablo; Weick, Clement; Lemiti, Mustapha; Baudrit, Mathieu
2015-09-01
Optical characterization and electrical performance evaluation are essential in the design and optimization of a concentrator photovoltaic system. The geometry, materials, and size of concentrator optics are diverse, and different environmental conditions affect their performance. CEA has developed a new concentrator photovoltaic system characterization bench, METHOD, which enables multi-physics optimization studies. The lens and cell temperatures are controlled independently with METHOD to study their isolated effects on the electrical and optical performance of the system. These influences can be studied in terms of their effect on optical efficiency, focal distance, spectral sensitivity, electrical efficiency, or cell current matching. Furthermore, the irradiance distribution produced by a concentrator optic can be mapped to study its variation versus the focal length or the lens temperature. The present work demonstrates this application by analyzing the performance of a Fresnel lens, linking temperature to optical and electrical performance.
Parameterization of hyperpolarized (13)C-bicarbonate-dissolution dynamic nuclear polarization.
Scholz, David Johannes; Otto, Angela M; Hintermair, Josef; Schilling, Franz; Frank, Annette; Köllisch, Ulrich; Janich, Martin A; Schulte, Rolf F; Schwaiger, Markus; Haase, Axel; Menzel, Marion I
2015-12-01
(13)C metabolic MRI using hyperpolarized (13)C-bicarbonate enables preclinical detection of pH. To improve signal-to-noise ratio, experimental procedures were refined, and the influence of pH, buffer capacity, temperature, and field strength were investigated. Bicarbonate preparation was investigated. Bicarbonate was prepared and applied in spectroscopy at 1, 3, 14 T using pure dissolution, culture medium, and MCF-7 cell spheroids. Healthy rats were imaged by spectral-spatial spiral acquisition for spatial and temporal bicarbonate distribution, pH mapping, and signal decay analysis. An optimized preparation technique for maximum solubility of 6 mol/L and polarization levels of 19-21% is presented; T1 and SNR dependency on field strength, buffer capacity, and pH was investigated. pH mapping in vivo is demonstrated. An optimized bicarbonate preparation and experimental procedure provided improved T1 and SNR values, allowing in vitro and in vivo applications.
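For context, hyperpolarized bicarbonate pH mapping generally derives pH from the ratio of the (13)C-bicarbonate and (13)C-CO2 signals through the Henderson-Hasselbalch relation; the expression below is the standard form, with the effective pKa (roughly 6.1-6.2 under physiological conditions) treated here as an assumed constant rather than a value reported in this abstract.

```latex
\mathrm{pH} \;=\; \mathrm{p}K_a \;+\; \log_{10}\!\left(\frac{[\mathrm{H^{13}CO_3^-}]}{[^{13}\mathrm{CO_2}]}\right)
```

Because both species share the same polarization decay, the signal ratio (and hence the pH estimate) is comparatively robust to T1 losses, which is one reason the preparation work focuses on solubility, polarization level and SNR.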
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Cardona, D; Li, K; Lubner, M G
Purpose: The introduction of the highly nonlinear MBIR algorithm to clinical CT systems has made CNR an invalid metric for kV optimization. The purpose of this work was to develop a task-based framework to unify kV and mAs optimization for both FBP- and MBIR-based CT systems. Methods: The kV-mAs optimization was formulated as a constrained minimization problem: to select kV and mAs to minimize dose under the constraint of maintaining the detection performance as clinically prescribed. To experimentally solve this optimization problem, exhaustive measurements of detectability index (d') for a hepatic lesion detection task were performed at 15 different mA levels and 4 kV levels using an anthropomorphic phantom. The measured d' values were used to generate an iso-detectability map; similarly, dose levels recorded at different kV-mAs combinations were used to generate an iso-dose map. The iso-detectability map was overlaid on top of the iso-dose map so that, for a prescribed detectability level d', the optimal kV-mA can be determined from the crossing between the d' contour and the dose contour that corresponds to the minimum dose. Results: Taking d'=16 as an example, the kV-mAs combinations on the measured iso-d' line of MBIR are 80–150 (3.8), 100–140 (6.6), 120–150 (11.3), and 140–160 (17.2), where values in the parentheses are measured dose values. As a result, the optimal kV was 80 and the optimal mA was 150. In comparison, the optimal kV and mA for FBP were 100 and 500, which corresponded to a dose level of 24 mGy. Results of in vivo animal experiments were consistent with the phantom results. Conclusion: A new method to optimize kV and mAs selection has been developed. This method is applicable to both linear and nonlinear CT systems such as those using MBIR. Additional dose savings can be achieved by combining MBIR with this method. This work was partially supported by an NIH grant R01CA169331 and GE Healthcare. K. Li, D. Gomez-Cardona, M. G. Lubner: Nothing to disclose. P. J. Pickhardt: Co-founder, VirtuoCTC, LLC; Stockholder, Cellectar Biosciences, Inc. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.
Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham
2015-01-01
Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing larger mapping datasets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" both in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
Okumura, Yasuo; Johnson, Susan B; Bunch, T Jared; Henz, Benhur D; O'Brien, Christine J; Packer, Douglas L
2008-06-01
While catheter tip/tissue contact has been shown to be an important determinant of ablative lesions in in vitro studies, the impact of contact on the outcomes of mapping and ablation in the intact heart has not been evaluated. Twelve dogs underwent atrial ablation guided by the Sensei(TM) robotic catheter remote control system. After intracardiac ultrasound (ICE) validation of contact force measured by an in-line mechanical sensor, the relationship between contact force and individual lesion formation was established during irrigated-tip ablation (flow 17 mL/sec) at 15 watts for 30 seconds. Minimal contact by ICE correlated with a force of 4.7 +/- 5.8 grams, consistent contact with 9.9 +/- 8.6 grams, and tissue tenting produced 25.0 +/- 14.0 grams. Conversely, catheter tip/tissue contact by ICE was predicted by contact force. A contact force of 10-20 and > or =20 grams generated full-thickness, larger volume ablative lesions than those created with <10 grams (98 +/- 69 and 89 +/- 70 mm(3) vs 40 +/- 42 mm(3), P < 0.05). Moderate (10 grams) and marked (15-20 grams) contact application produced 1.5 X greater electroanatomic map volumes than were seen with minimal contact (5 grams) (26 +/- 3 cm(3) vs 33 +/- 6, 39 +/- 3 cm(3), P < 0.05). The electroanatomic map/CT merge process was also more distorted when mapping was performed at moderate to marked contact force. This study shows that mapping and ablation using a robotic sheath guidance system are critically dependent on the generated force. These findings suggest that ablative lesion size is optimized by the application of 10-20 grams of contact force, although mapping requires lower-force application to avoid image distortions.
Perspective: Stochastic magnetic devices for cognitive computing
NASA Astrophysics Data System (ADS)
Roy, Kaushik; Sengupta, Abhronil; Shim, Yong
2018-06-01
Stochastic switching of nanomagnets can potentially enable probabilistic cognitive hardware consisting of noisy neural and synaptic components. Furthermore, computational paradigms inspired by the Ising computing model require stochasticity for achieving near-optimality in solutions to various types of combinatorial optimization problems such as the Graph Coloring Problem or the Travelling Salesman Problem. Achieving optimal solutions in such problems is computationally exhaustive and requires natural annealing to arrive at the near-optimal solutions. Stochastic switching of devices also finds use in applications involving Deep Belief Networks and Bayesian Inference. In this article, we provide a multi-disciplinary perspective across the stack of devices, circuits, and algorithms to illustrate how the stochastic switching dynamics of spintronic devices in the presence of thermal noise can provide a direct mapping to the computational units of such probabilistic intelligent systems.
Comparison of Image Restoration Methods for Lunar Epithermal Neutron Emission Mapping
NASA Technical Reports Server (NTRS)
McClanahan, T. P.; Ivatury, V.; Milikh, G.; Nandikotkur, G.; Puetter, R. C.; Sagdeev, R. Z.; Usikov, D.; Mitrofanov, I. G.
2009-01-01
Orbital measurements of neutrons by the Lunar Exploration Neutron Detector (LEND) onboard the Lunar Reconnaissance Orbiter are being used to quantify the spatial distribution of near-surface hydrogen (H). Inferred H concentration maps have low signal-to-noise (SN), and image restoration (IR) techniques are being studied to enhance the results. A single-blind, two-phase study is described in which four teams of researchers independently developed image restoration techniques optimized for LEND data. Synthetic lunar epithermal neutron emission maps were derived from LEND simulations. These data were used as ground truth to determine the relative quantitative performance of the IR methods versus a default denoising (smoothing) technique. We reviewed and used factors influencing orbital remote sensing of neutrons emitted from the lunar surface to develop a database of synthetic "true" maps for performance evaluation. A prior, independent training phase was implemented for each technique to ensure methods were optimized before the blind trial. Method performance was determined using several regional root-mean-square error metrics specific to epithermal signals of interest. Results indicate unbiased IR methods realize only small signal gains in most of the tested metrics. This suggests other physically based modeling assumptions are required to produce appreciable signal gains in similar low-SN IR applications.
Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan
2017-01-01
In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for acquiring useful information that is applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field (B1+) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1+ map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1+ map propagated noise levels into the T1 maps comparable to those from either of the two SPGR images. Improving the precision of the B1+ measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in the T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We concluded that for T1 mapping experiments, the error propagated from the B1+ map must be considered. Optimizing the SPGR signals while neglecting to improve the precision of the B1+ map may result in grossly overestimating the precision of the estimated T1 values.
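The role of the B1+ map in this error budget follows from the spoiled gradient echo signal model used by the variable flip angle method: the realized flip angle is the nominal angle scaled by the measured transmit field, so noise in that scale factor propagates directly into the T1 estimate obtained from two SPGR acquisitions. The standard signal equation is reproduced below for reference; the symbols follow common usage rather than the paper's notation.

```latex
S(\alpha_{\mathrm{nom}}) \;=\; M_0 \,\sin\!\bigl(f_T\,\alpha_{\mathrm{nom}}\bigr)\,
\frac{1 - e^{-T_R/T_1}}{1 - e^{-T_R/T_1}\cos\!\bigl(f_T\,\alpha_{\mathrm{nom}}\bigr)}
```

Here f_T is the relative transmit field taken from the B1+ map, alpha_nom the nominal flip angle, and T_R the repetition time; T1 is estimated by combining two such measurements at different nominal flip angles, so fluctuations in f_T enter the estimate on the same footing as fluctuations in the two SPGR signals.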
Application of preconditioned alternating direction method of multipliers in depth from focal stack
NASA Astrophysics Data System (ADS)
Javidnia, Hossein; Corcoran, Peter
2018-03-01
Postcapture refocusing effect in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect is totally dependent on the combination of the depth layers in the stack. The accuracy of the extended depth of field effect in this application can be improved significantly by computing an accurate depth map, which has been an open issue for decades. To tackle this issue, a framework is proposed based on a preconditioned alternating direction method of multipliers for depth from the focal stack and synthetic defocus application. In addition to its ability to provide high structural accuracy, the optimization function of the proposed framework can, in fact, converge faster and better than state-of-the-art methods. The qualitative evaluation has been done on 21 sets of focal stacks and the optimization function has been compared against five other methods. Later, 10 light field image sets have been transformed into focal stacks for quantitative evaluation purposes. Preliminary results indicate that the proposed framework has a better performance in terms of structural accuracy and optimization in comparison to the current state-of-the-art methods.
Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
An optimal current density map is crucial in magnet design to provide the initial values within the search spaces of an optimization process for determining the final coil arrangement of the magnet. A strategy for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils, in which the stored energy is minimized within a constrained domain, is outlined. The current density maps obtained utilising the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.
Automatic metro map layout using multicriteria optimization.
Stott, Jonathan; Rodgers, Peter; Martínez-Ovando, Juan Carlos; Walker, Stephen G
2011-01-01
This paper describes an automatic mechanism for drawing metro maps. We apply multicriteria optimization to find effective placement of stations with a good line layout and to label the map unambiguously. A number of metrics are defined, which are used in a weighted sum to find a fitness value for a layout of the map. A hill climbing optimizer is used to reduce the fitness value and find improved map layouts. To avoid local minima, we apply clustering techniques to the map: the hill climber moves both stations and clusters when searching for improved layouts. We show the method applied to a number of metro maps, and describe an empirical study that provides some quantitative evidence that automatically drawn metro maps can help users to find routes more efficiently than either published maps or undistorted maps. Moreover, we have found that, in these cases, study subjects indicate a preference for automatically drawn maps over the alternatives. © 2011 IEEE. Published by the IEEE Computer Society.
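In miniature, the optimizer's inner loop amounts to repeatedly perturbing station positions and keeping changes that lower a weighted-sum fitness. The toy sketch below uses a single edge-length metric and single-station moves on a tiny line; the published method combines several metrics in a weighted sum, moves clusters as well as individual stations, and handles labelling, none of which is reproduced here.

```python
import random

random.seed(0)
# Toy "metro line": station -> grid position; edges join consecutive stations.
layout = {"A": (0, 0), "B": (3, 1), "C": (1, 2), "D": (4, 0), "E": (2, 3)}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]

def fitness(pos):
    """Single illustrative metric: penalize edges longer or shorter than one grid unit."""
    return sum(abs(abs(pos[u][0] - pos[v][0]) + abs(pos[u][1] - pos[v][1]) - 1)
               for u, v in edges)

def perturb(pos):
    """Move one randomly chosen station to an adjacent grid point."""
    new = dict(pos)
    s = random.choice(list(new))
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    new[s] = (new[s][0] + dx, new[s][1] + dy)
    return new

best, best_f = layout, fitness(layout)
for _ in range(2000):                 # hill climbing: keep non-worsening moves only
    cand = perturb(best)
    f = fitness(cand)
    if f <= best_f:
        best, best_f = cand, f

print(best_f, best)
```

In the full system the scalar returned by fitness() would be a weighted sum of several such metrics (line straightness, octilinearity, label overlap, and so on), which is exactly what makes the approach multicriteria.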
Spectral edge: gradient-preserving spectral mapping for image fusion.
Connah, David; Drew, Mark S; Finlayson, Graham D
2015-12-01
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.
2012-01-01
Background: There are numerous applications for Health Information Systems (HIS) that support specific tasks in the clinical workflow. The Lean method has been used increasingly to optimize clinical workflows, by removing waste and shortening the delivery cycle time. There are a limited number of studies on Lean applications related to HIS. Therefore, we applied the Lean method to evaluate the clinical processes related to HIS, in order to evaluate its efficiency in removing waste and optimizing the process flow. This paper presents the evaluation findings of these clinical processes, with regards to a critical care information system (CCIS), known as IntelliVue Clinical Information Portfolio (ICIP), and recommends solutions to the problems that were identified during the study. Methods: We conducted a case study under actual clinical settings, to investigate how the Lean method can be used to improve the clinical process. We used observations, interviews, and document analysis, to achieve our stated goal. We also applied two tools from the Lean methodology, namely the Value Stream Mapping and the A3 problem-solving tools. We used eVSM software to plot the Value Stream Map and A3 reports. Results: We identified a number of problems related to inefficiency and waste in the clinical process, and proposed an improved process model. Conclusions: The case study findings show that the Value Stream Mapping and the A3 reports can be used as tools to identify waste and integrate the process steps more efficiently. We also proposed a standardized and improved clinical process model and suggested an integrated information system that combines database and software applications to reduce waste and data redundancy. PMID:23259846
Yusof, Maryati Mohd; Khodambashi, Soudabeh; Mokhtar, Ariffin Marzuki
2012-12-21
There are numerous applications for Health Information Systems (HIS) that support specific tasks in the clinical workflow. The Lean method has been used increasingly to optimize clinical workflows, by removing waste and shortening the delivery cycle time. There are a limited number of studies on Lean applications related to HIS. Therefore, we applied the Lean method to evaluate the clinical processes related to HIS, in order to evaluate its efficiency in removing waste and optimizing the process flow. This paper presents the evaluation findings of these clinical processes, with regards to a critical care information system (CCIS), known as IntelliVue Clinical Information Portfolio (ICIP), and recommends solutions to the problems that were identified during the study. We conducted a case study under actual clinical settings, to investigate how the Lean method can be used to improve the clinical process. We used observations, interviews, and document analysis, to achieve our stated goal. We also applied two tools from the Lean methodology, namely the Value Stream Mapping and the A3 problem-solving tools. We used eVSM software to plot the Value Stream Map and A3 reports. We identified a number of problems related to inefficiency and waste in the clinical process, and proposed an improved process model. The case study findings show that the Value Stream Mapping and the A3 reports can be used as tools to identify waste and integrate the process steps more efficiently. We also proposed a standardized and improved clinical process model and suggested an integrated information system that combines database and software applications to reduce waste and data redundancy.
Optimal guidance with obstacle avoidance for nap-of-the-earth flight
NASA Technical Reports Server (NTRS)
Pekelsma, Nicholas J.
1988-01-01
The development of automatic guidance is discussed for helicopter Nap-of-the-Earth (NOE) and near-NOE flight. It deals with algorithm refinements relating to automated real-time flight path planning and to mission planning. With regard to path planning, it relates rotorcraft trajectory characteristics to the NOE computation scheme and addresses real-time computing issues and both ride quality issues and pilot-vehicle interfaces. The automated mission planning algorithm refinements include route optimization, automatic waypoint generation, interactive applications, and provisions for integrating the results into the real-time path planning software. A microcomputer based mission planning workstation was developed and is described. Further, the application of Defense Mapping Agency (DMA) digital terrain to both the mission planning workstation and to automatic guidance is both discussed and illustrated.
Depth image super-resolution via semi self-taught learning framework
NASA Astrophysics Data System (ADS)
Zhao, Furong; Cao, Zhiguo; Xiao, Yang; Zhang, Xiaodi; Xian, Ke; Li, Ruibo
2017-06-01
Depth images have recently attracted much attention in computer vision and in the production of high-quality 3D content for 3DTV and 3D movies. In this paper, we present a new semi self-taught learning framework for enhancing the resolution of depth maps without making use of ancillary color image data at the target resolution, or multiple aligned depth maps. Our framework consists of cascaded random forests progressing from coarse to fine results. We learn the surface information and structure transformations both from a small set of high-quality depth exemplars and from the input depth map itself across different scales. Considering that edges play an important role in depth map quality, we optimize an effective regularized objective, computed over the output image space and the input edge space, within the random forests. Experiments show the effectiveness and superiority of our method against other techniques with or without aligned RGB information.
NASA Astrophysics Data System (ADS)
Beitone, C.; Balandraud, X.; Delpueyo, D.; Grédiac, M.
2017-01-01
This paper presents a post-processing technique for noisy temperature maps based on a gradient anisotropic diffusion (GAD) filter in the context of heat source reconstruction. The aim is to reconstruct heat source maps from temperature maps measured using infrared (IR) thermography. Synthetic temperature fields corrupted by added noise are first considered. The GAD filter, which relies on a diffusion process, is optimized to retrieve as well as possible a heat source concentration in a two-dimensional plate. The influence of the dimensions and the intensity of the heat source concentration are discussed. The results obtained are also compared with two other types of filters: averaging filter and Gaussian derivative filter. The second part of this study presents an application for experimental temperature maps measured with an IR camera. The results demonstrate the relevancy of the GAD filter in extracting heat sources from noisy temperature fields.
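The gradient anisotropic diffusion idea can be illustrated with a generic Perona-Malik-type filter applied to a noisy 2D temperature map. This is a minimal sketch, not the optimized GAD filter of the paper; the conduction parameter kappa, the time step and the iteration count are illustrative assumptions:

```python
import numpy as np

def anisotropic_diffusion(T, n_iter=50, kappa=0.5, dt=0.2):
    """Perona-Malik-style gradient anisotropic diffusion of a 2D map T.
    Diffusion is strong in flat regions and weak across strong gradients,
    so noise is smoothed while sharp features (e.g. a heat source edge) are preserved."""
    T = T.astype(float).copy()
    for _ in range(n_iter):
        # differences towards the four neighbours (periodic boundaries for brevity)
        dN = np.roll(T, -1, axis=0) - T
        dS = np.roll(T, 1, axis=0) - T
        dE = np.roll(T, -1, axis=1) - T
        dW = np.roll(T, 1, axis=1) - T
        # edge-stopping function g(|grad|) = exp(-(|grad| / kappa)^2)
        T += dt * (np.exp(-(dN / kappa) ** 2) * dN + np.exp(-(dS / kappa) ** 2) * dS
                   + np.exp(-(dE / kappa) ** 2) * dE + np.exp(-(dW / kappa) ** 2) * dW)
    return T

# Example: a synthetic hot spot corrupted by added noise, as in the first part of the study.
T_true = np.zeros((128, 128)); T_true[60:68, 60:68] = 1.0
T_filtered = anisotropic_diffusion(T_true + 0.1 * np.random.randn(128, 128))
```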
Subsite mapping of enzymes. Depolymerase computer modelling.
Allen, J D; Thoma, J A
1976-01-01
We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629
Optimal contact definition for reconstruction of contact maps.
Duarte, Jose M; Sathyapriya, Rajagopal; Stehr, Henning; Filippis, Ioannis; Lappe, Michael
2010-05-27
Contact maps have been extensively used as a simplified representation of protein structures. They capture most important features of a protein's fold, being preferred by a number of researchers for the description and study of protein structures. Inspired by the model's simplicity many groups have dedicated a considerable amount of effort towards contact prediction as a proxy for protein structure prediction. However a contact map's biological interest is subject to the availability of reliable methods for the 3-dimensional reconstruction of the structure. We use an implementation of the well-known distance geometry protocol to build realistic protein 3-dimensional models from contact maps, performing an extensive exploration of many of the parameters involved in the reconstruction process. We try to address the questions: a) to what accuracy does a contact map represent its corresponding 3D structure, b) what is the best contact map representation with regard to reconstructability and c) what is the effect of partial or inaccurate contact information on the 3D structure recovery. Our results suggest that contact maps derived from the application of a distance cutoff of 9 to 11 Å around the Cβ atoms constitute the most accurate representation of the 3D structure. The reconstruction process does not provide a single solution to the problem but rather an ensemble of conformations that are within 2 Å RMSD of the crystal structure and with lower values for the pairwise average ensemble RMSD. Interestingly it is still possible to recover a structure with partial contact information, although wrong contacts can lead to dramatic loss in reconstruction fidelity. Thus contact maps represent a valid approximation to the structures with an accuracy comparable to that of experimental methods. The optimal contact definitions constitute key guidelines for methods based on contact maps such as structure prediction through contacts and structural alignments based on maximum contact map overlap.
Optimal contact definition for reconstruction of Contact Maps
2010-01-01
Background Contact maps have been extensively used as a simplified representation of protein structures. They capture most important features of a protein's fold, being preferred by a number of researchers for the description and study of protein structures. Inspired by the model's simplicity many groups have dedicated a considerable amount of effort towards contact prediction as a proxy for protein structure prediction. However a contact map's biological interest is subject to the availability of reliable methods for the 3-dimensional reconstruction of the structure. Results We use an implementation of the well-known distance geometry protocol to build realistic protein 3-dimensional models from contact maps, performing an extensive exploration of many of the parameters involved in the reconstruction process. We try to address the questions: a) to what accuracy does a contact map represent its corresponding 3D structure, b) what is the best contact map representation with regard to reconstructability and c) what is the effect of partial or inaccurate contact information on the 3D structure recovery. Our results suggest that contact maps derived from the application of a distance cutoff of 9 to 11Å around the Cβ atoms constitute the most accurate representation of the 3D structure. The reconstruction process does not provide a single solution to the problem but rather an ensemble of conformations that are within 2Å RMSD of the crystal structure and with lower values for the pairwise average ensemble RMSD. Interestingly it is still possible to recover a structure with partial contact information, although wrong contacts can lead to dramatic loss in reconstruction fidelity. Conclusions Thus contact maps represent a valid approximation to the structures with an accuracy comparable to that of experimental methods. The optimal contact definitions constitute key guidelines for methods based on contact maps such as structure prediction through contacts and structural alignments based on maximum contact map overlap. PMID:20507547
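As a minimal illustration of the representation studied in these two records, the sketch below builds a binary contact map from Cβ coordinates using a distance cutoff; the coordinates are random placeholders and the reconstruction step (distance geometry back to 3D) is not shown:

```python
import numpy as np

def contact_map(cb_coords, cutoff=10.0):
    """Binary contact map: residues i, j are in contact if the distance
    between their C-beta atoms is below `cutoff` (angstroms)."""
    d = np.linalg.norm(cb_coords[:, None, :] - cb_coords[None, :, :], axis=-1)
    contacts = (d < cutoff).astype(int)
    np.fill_diagonal(contacts, 0)   # ignore trivial self-contacts
    return contacts

# Toy usage: random coordinates standing in for C-beta positions of a 50-residue chain.
coords = np.random.rand(50, 3) * 30.0
cm = contact_map(coords, cutoff=10.0)   # a 9-11 angstrom cutoff was found optimal in the study
```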
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mediavilla, E.; Lopez, P.; Mediavilla, T.
2011-11-01
We derive an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps. It is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells (inverse polygon mapping, IPM), not including critical points (except perhaps at the cell boundaries). The zeroth-order term of the series expansion leads to the method described by Mediavilla et al. The first-order term is used to study the error induced by the truncation of the series at zeroth order, explaining the high accuracy of the IPM even at this low order of approximation. Interpreting the Inverse Ray Shooting (IRS) method in terms of IPM, we explain the previously reported N^(-3/4) dependence of the IRS error with the number of collected rays per pixel. Cells intersected by critical curves (critical cells) transform to non-simply connected regions with topological pathologies like auto-overlapping or non-preservation of the boundary under the transformation. To define a non-critical partition, we use a linear approximation of the critical curve to divide each critical cell into two non-critical subcells. The optimal choice of the cell size depends basically on the curvature of the critical curves. For typical applications in which the pixel of the magnification map is a small fraction of the Einstein radius, a one-to-one relationship between the cell and pixel sizes in the absence of lensing guarantees both the consistence of the method and a very high accuracy. This prescription is simple but very conservative. We show that substantially larger cells can be used to obtain magnification maps with huge savings in computation time.
WE-AB-209-10: Optimizing the Delivery of Sequential Fluence Maps for Efficient VMAT Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, D; Balvert, M
2016-06-15
Purpose: To develop an optimization model and solution approach for computing MLC leaf trajectories and dose rates for high quality matching of a set of optimized fluence maps to be delivered sequentially around a patient in a VMAT treatment. Methods: We formulate the fluence map matching problem as a nonlinear optimization problem where time is discretized but dose rates and leaf positions are continuous variables. For a given allotted time, which is allocated across the fluence maps based on the complexity of each fluence map, the optimization problem searches for the best leaf trajectories and dose rates such that the original fluence maps are closely recreated. Constraints include maximum leaf speed, maximum dose rate, and leaf collision avoidance, as well as the constraint that the ending leaf positions for one map are the starting leaf positions for the next map. The resulting model is non-convex but smooth, and therefore we solve it by local searches from a variety of starting positions. We improve solution time by a custom decomposition approach which allows us to decouple the rows of the fluence maps and solve each leaf pair individually. This decomposition also makes the problem easily parallelized. Results: We demonstrate the method on a prostate case and a head-and-neck case and show that one can recreate fluence maps to a high degree of fidelity in modest total delivery time (minutes). Conclusion: We present a VMAT sequencing method that reproduces optimal fluence maps by searching over a vast number of possible leaf trajectories. By varying the total allotted time given, this approach is the first of its kind to allow users to produce VMAT solutions that span the range of wide-field coarse VMAT deliveries to narrow-field high-MU sliding window-like approaches.
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling
Cuperlovic-Culf, Miroslava
2018-01-01
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling.
Cuperlovic-Culf, Miroslava
2018-01-11
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies.
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
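A simplified way to see how a convex Pareto efficient frontier can be traced, and how a new plan can be warm-started from a neighbouring one, is weighted-sum scalarization of two convex objectives. This is not the constrained Sandwich algorithm of the paper; the toy objectives, bounds and weights below are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Two toy convex objectives over a "fluence" vector x >= 0:
# f1 is a proxy for tumour dose heterogeneity, f2 a proxy for critical organ dose.
def f1(x):
    return float(np.sum((x - 1.0) ** 2))

def f2(x):
    return float(np.sum(x ** 2))

def pareto_point(w, x0):
    """Minimize w*f1 + (1-w)*f2; for convex objectives every such minimizer
    is a Pareto optimal plan."""
    res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0,
                   bounds=[(0.0, None)] * len(x0))
    return res.x

x_prev = np.full(10, 0.5)
frontier = []
for w in np.linspace(0.05, 0.95, 7):
    # warm start from the neighbouring plan, analogous to interpolating fluence maps
    x_prev = pareto_point(w, x_prev)
    frontier.append((f1(x_prev), f2(x_prev)))
print(frontier)
```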
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limitation of spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. The traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, linear subpixel features are pre-determined based on the hyperspectral characteristics, and linear subpixels among the remaining mixed pixels are detected by maximum linearization index analysis. The classes of linear subpixels are determined by using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Mapping of cavitational activity in a pilot plant dyeing equipment.
Actis Grande, G; Giansetti, M; Pezzin, A; Rovero, G; Sicardi, S
2015-11-01
A large number of papers in the literature report dyeing intensification based on the application of ultrasound (US) in the dyeing liquor. Mass transfer mechanisms are described and quantified; nevertheless, these experimental results in general refer to small laboratory apparatuses with a capacity of a few hundred millilitres and extremely high volumetric energy intensity. With the strategy of overcoming the scale-up inaccuracy consequent to the technological application of ultrasound, a dyeing pilot-plant prototype of suitable liquor capacity (about 40 L) and properly simulating several liquor-to-textile hydraulic relationships was designed by including US transducers with different geometries. Optimal dyeing may be obtained by optimising the distance between transducer and textile material, the liquid height being a non-negligible operating parameter. Hence, mapping the cavitation energy in the machinery is expected to provide basic data on the intensity and distribution of the ultrasonic field in the aqueous liquor. A flat ultrasonic transducer (absorbed electrical power of 600 W), equipped with eight devices emitting at 25 kHz, was mounted horizontally at the equipment bottom. Considering industrial scale dyeing, liquor and textile substrate are reciprocally displaced to achieve a uniform colouration. In this technology a non-uniform US field could affect the dyeing evenness to a large extent; hence, mapping the cavitation energy distribution in the machinery is expected to provide fundamental data and define optimal operating conditions. Local values of the cavitation intensity were recorded by using a carefully calibrated Ultrasonic Energy Meter, which is able to measure the power per unit surface generated by the cavitation implosion of bubbles. More than 200 measurements were recorded to define the map at each horizontal plane positioned at a different distance from the US transducer; tap water was heated at the same temperature used for dyeing tests (60°C). Different liquid flow rates were tested to investigate the effect of the hydrodynamics characterising the equipment. The mapping of the cavitation intensity in the pilot-plant machinery was performed with the following goals: (a) to evaluate the influence of turbulence on the cavitation intensity, and (b) to determine the optimal distance from the ultrasound device at which a fabric should be positioned, this parameter being a compromise between the cavitation intensity (higher next to the transducer) and the US field uniformity (achieved at some distance from this device). By carrying out dyeing tests of wool fabrics in the prototype unit, consistent results were confirmed by comparison with the mapping of cavitation intensity.
NASA Astrophysics Data System (ADS)
Shimoyama, Koji; Jeong, Shinkyu; Obayashi, Shigeru
A new approach for multi-objective robust design optimization was proposed and applied to a real-world design problem with a large number of objective functions. The present approach is assisted by response surface approximation and visual data-mining, and resulted in two major gains regarding computational time and data interpretation. The Kriging model for response surface approximation can markedly reduce the computational time for predictions of robustness. In addition, the use of self-organizing maps as a data-mining technique allows visualization of complicated design information between optimality and robustness in a comprehensible two-dimensional form. Therefore, the extraction and interpretation of trade-off relations between optimality and robustness of design, and also the location of sweet spots in the design space, can be performed in a comprehensive manner.
Map synchronization in optical communication systems
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.; Mohanty, N.
1973-01-01
The time synchronization problem in an optical communication system is approached as a problem of estimating the arrival time (delay variable) of a known transmitted field. Maximum a posteriori (MAP) estimation procedures are used to generate optimal estimators, with emphasis placed on their interpretation as a practical system device. Estimation variances are used to aid in the design of the transmitter signals for best synchronization. Extension is made to systems that perform separate acquisition and tracking operations during synchronization. The closely allied problem of maintaining timing during pulse position modulation is also considered. The results have obvious application to optical radar and ranging systems, as well as the time synchronization problem.
Experimental realization of a highly secure chaos communication under strong channel noise
NASA Astrophysics Data System (ADS)
Ye, Weiping; Dai, Qionglin; Wang, Shihong; Lu, Huaping; Kuang, Jinyu; Zhao, Zhenfeng; Zhu, Xiangqing; Tang, Guoning; Huang, Ronghuai; Hu, Gang
2004-09-01
A one-way coupled spatiotemporally chaotic map lattice is used to construct a cryptosystem. With the combined application of both chaotic computations and conventional algebraic operations, our system has optimal cryptographic properties much better than the separate applications of known chaotic and conventional methods. We have realized experiments to practice duplex voice secure communications in a realistic wired Public Switched Telephone Network by applying our chaotic system and the Advanced Encryption Standard (AES), respectively, for cryptography. Our system can work stably against strong channel noise when AES fails to work.
Objective quality assessment of tone-mapped images.
Yeganeh, Hojatollah; Wang, Zhou
2013-02-01
Tone-mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images provide practically useful tools for the visualization of HDR images on standard LDR displays. Different TMOs create different tone-mapped images, and a natural question is which one has the best quality. Without an appropriate quality measure, different TMOs cannot be compared, and further improvement is directionless. Subjective rating may be a reliable evaluation method, but it is expensive and time consuming, and more importantly, is difficult to be embedded into optimization frameworks. Here we propose an objective quality assessment algorithm for tone-mapped images by combining: 1) a multiscale signal fidelity measure on the basis of a modified structural similarity index and 2) a naturalness measure on the basis of intensity statistics of natural images. Validations using independent subject-rated image databases show good correlations between subjective ranking score and the proposed tone-mapped image quality index (TMQI). Furthermore, we demonstrate the extended applications of TMQI using two examples: parameter tuning for TMOs and adaptive fusion of multiple tone-mapped images.
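A crude stand-in for this kind of two-component index can be sketched as follows. This is not the actual TMQI formulation: the log-luminance rescaling, the naturalness priors (mean near 0.5, standard deviation near 0.2) and the mixing weight are all illustrative assumptions:

```python
import numpy as np
from skimage.metrics import structural_similarity

def toy_tone_mapped_quality(hdr_luminance, ldr_image, alpha=0.8):
    """Toy two-component quality score for a tone-mapped image:
    (1) structural fidelity between normalized log-HDR luminance and the LDR result,
    (2) a naturalness term rewarding LDR brightness/contrast statistics close to
        assumed natural-image priors."""
    log_hdr = np.log1p(hdr_luminance.astype(float))
    log_hdr = (log_hdr - log_hdr.min()) / (log_hdr.max() - log_hdr.min() + 1e-12)
    ldr = ldr_image.astype(float) / 255.0
    fidelity = structural_similarity(log_hdr, ldr, data_range=1.0)
    naturalness = (np.exp(-(ldr.mean() - 0.5) ** 2 / 0.02)
                   * np.exp(-(ldr.std() - 0.2) ** 2 / 0.01))
    return alpha * fidelity + (1 - alpha) * naturalness

# Toy usage with synthetic data standing in for an HDR scene and a tone-mapped result.
hdr = np.random.rand(64, 64) * 1e4
ldr = (255 * (np.log1p(hdr) / np.log1p(hdr).max())).astype(np.uint8)
print(toy_tone_mapped_quality(hdr, ldr))
```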
Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs
NASA Astrophysics Data System (ADS)
Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung
2014-05-01
The vast availability and improved quality of optical satellite data and digital elevation models (DEMs), as well as the need for complete and up-to-date landslide inventories at various spatial scales have fostered the development of semi-automated landslide recognition systems. Among the tested approaches for designing such systems, object-based image analysis (OBIA) stood out as a highly promising methodology. OBIA offers a flexible, spatially enabled framework for effective landslide mapping. Most object-based landslide mapping systems, however, have been tailored to specific, mainly small-scale study areas or even to single landslides only. Even though reported mapping accuracies tend to be higher than for pixel-based approaches, accuracy values are still relatively low and depend on the particular study. There is still room to improve the applicability and objectivity of object-based landslide mapping systems. The presented study aims at developing a knowledge-based landslide mapping system implemented in an OBIA environment, i.e. Trimble eCognition. In comparison to previous knowledge-based approaches, the classification of segmentation-derived multi-scale image objects relies on digital landslide signatures. These signatures hold the common operational knowledge on digital landslide mapping, as reported by 25 Taiwanese landslide experts during personal semi-structured interviews. Specifically, the signatures include information on commonly used data layers, spectral and spatial features, and feature thresholds. The signatures guide the selection and implementation of mapping rules that were finally encoded in Cognition Network Language (CNL). Multi-scale image segmentation is optimized by using the improved Estimation of Scale Parameter (ESP) tool. The approach described above is developed and tested for mapping landslides in a sub-region of the Baichi catchment in Northern Taiwan based on SPOT imagery and a high-resolution DEM. An object-based accuracy assessment is conducted by quantitatively comparing extracted landslide objects with landslide polygons that were visually interpreted by local experts. The applicability and transferability of the mapping system are evaluated by comparing initial accuracies with those achieved for the following two tests: first, usage of a SPOT image from the same year, but for a different area within the Baichi catchment; second, usage of SPOT images from multiple years for the same region. The integration of the common knowledge via digital landslide signatures is new in object-based landslide studies. In combination with strategies to optimize image segmentation this may lead to a more objective, transferable and stable knowledge-based system for the mapping of landslides from optical satellite data and DEMs.
NASA Astrophysics Data System (ADS)
Petropoulos, George P.; Kontoes, Charalambos C.; Keramitsoglou, Iphigenia
2012-08-01
In this study, the potential of the EO-1 Advanced Land Imager (ALI) radiometer for land cover and especially burnt area mapping from a single image analysis is investigated. Co-orbital imagery from the Landsat Thematic Mapper (TM) was also utilised for comparison purposes. Both images were acquired shortly after the suppression of a fire that occurred during the summer of 2009 North-East of Athens, the capital of Greece. The Maximum Likelihood (ML), Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) classifiers were parameterised and subsequently applied to the acquired satellite datasets. Evaluation of the land use/cover mapping accuracy was based on the error matrix statistics. Also, the McNemar test was used to evaluate the statistical significance of the differences between the approaches tested. Derived burnt area estimates were validated against the operationally deployed Services and Applications For Emergency Response (SAFER) Burnt Scar Mapping service. All classifiers applied to either ALI or TM imagery proved flexible enough to map land cover and also to extract the burnt area from other land surface types. The highest total classification accuracy and burnt area detection capability was returned from the application of SVMs to ALI data. This was due to the SVMs' ability to identify an optimal separating hyperplane for the best separation of classes, which was able to better utilise ALI's advanced technological characteristics in comparison to those of the TM sensor. This study is to our knowledge the first of its kind, effectively demonstrating the benefits of the combined application of SVMs to ALI data, further implying that ALI technology may prove highly valuable in mapping burnt areas and land use/cover if it is incorporated into the development of the Landsat 8 mission, planned to be launched in the coming years.
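As a hedged illustration of the classification step, the snippet below trains an RBF-kernel SVM on per-pixel spectra and reports error-matrix statistics; the training data are random placeholders (standing in for samples drawn from ALI's nine multispectral bands) and the hyperparameters are not the ones used in the study:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# X: per-pixel spectra sampled from training polygons (n_pixels x n_bands),
# y: land-cover labels (e.g. 0 = burnt, 1 = vegetation, 2 = urban, 3 = bare soil).
X = np.random.rand(600, 9)            # 9 bands stand in for ALI multispectral data
y = np.random.randint(0, 4, 600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # RBF kernel: separating hyperplane in feature space
clf.fit(X_tr, y_tr)

y_pred = clf.predict(X_te)
print(accuracy_score(y_te, y_pred))
print(confusion_matrix(y_te, y_pred))           # error-matrix statistics as used in the study
```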
Lecours, Vincent; Brown, Craig J; Devillers, Rodolphe; Lucieer, Vanessa L; Edinger, Evan N
2016-01-01
Selecting appropriate environmental variables is a key step in ecology. Terrain attributes (e.g. slope, rugosity) are routinely used as abiotic surrogates of species distribution and to produce habitat maps that can be used in decision-making for conservation or management. Selecting appropriate terrain attributes for ecological studies may be a challenging process that can lead users to select a subjective, potentially sub-optimal combination of attributes for their applications. The objective of this paper is to assess the impacts of subjectively selecting terrain attributes for ecological applications by comparing the performance of different combinations of terrain attributes in the production of habitat maps and species distribution models. Seven different selections of terrain attributes, alone or in combination with other environmental variables, were used to map benthic habitats of German Bank (off Nova Scotia, Canada). 29 maps of potential habitats based on unsupervised classifications of biophysical characteristics of German Bank were produced, and 29 species distribution models of sea scallops were generated using MaxEnt. The performances of the 58 maps were quantified and compared to evaluate the effectiveness of the various combinations of environmental variables. One of the combinations of terrain attributes (recommended in a related study, and including a measure of relative position, slope, two measures of orientation, topographic mean and a measure of rugosity) yielded better results than the other selections for both methodologies, confirming that they together best describe terrain properties. Important differences in performance (up to 47% in accuracy measurement) and spatial outputs (up to 58% in spatial distribution of habitats) highlighted the importance of carefully selecting variables for ecological applications. This paper demonstrates that making a subjective choice of variables may reduce map accuracy and produce maps that do not adequately represent habitats and species distributions, thus having important implications when these maps are used for decision-making.
A case study in programming a quantum annealer for hard operational planning problems
NASA Astrophysics Data System (ADS)
Rieffel, Eleanor G.; Venturelli, Davide; O'Gorman, Bryan; Do, Minh B.; Prystay, Elicia M.; Smelyanskiy, Vadim N.
2015-01-01
We report on a case study in programming an early quantum annealer to attack optimization problems related to operational planning. While a number of studies have looked at the performance of quantum annealers on problems native to their architecture, and others have examined performance of select problems stemming from an application area, ours is one of the first studies of a quantum annealer's performance on parametrized families of hard problems from a practical domain. We explore two different general mappings of planning problems to quadratic unconstrained binary optimization (QUBO) problems, and apply them to two parametrized families of planning problems, navigation-type and scheduling-type. We also examine two more compact, but problem-type specific, mappings to QUBO, one for the navigation-type planning problems and one for the scheduling-type planning problems. We study embedding properties and parameter setting and examine their effect on the efficiency with which the quantum annealer solves these problems. From these results, we derive insights useful for the programming and design of future quantum annealers: problem choice, the mapping used, the properties of the embedding, and the annealing profile all matter, each significantly affecting the performance.
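To make the QUBO mapping concrete, here is a minimal sketch of how a tiny scheduling-type instance (three tasks, three slots, one task per slot) can be encoded as quadratic penalties and solved by brute force in place of the annealer; the penalty weight and problem size are illustrative assumptions:

```python
import numpy as np
from itertools import product

# Binary variable x[i, t] = 1 if task i runs in time slot t.
# QUBO form: minimize x^T Q x, with constraints encoded as quadratic penalties.
n_tasks, n_slots = 3, 3
n = n_tasks * n_slots
idx = lambda i, t: i * n_slots + t
A = 2.0                                     # penalty weight (assumption)

Q = np.zeros((n, n))
for i in range(n_tasks):                    # (sum_t x[i,t] - 1)^2: each task exactly once
    for t in range(n_slots):
        Q[idx(i, t), idx(i, t)] += -A       # linear part (x^2 = x for binary variables)
        for u in range(t + 1, n_slots):
            Q[idx(i, t), idx(i, u)] += 2 * A
for t in range(n_slots):                    # no two tasks may share a slot
    for i in range(n_tasks):
        for j in range(i + 1, n_tasks):
            Q[idx(i, t), idx(j, t)] += A

# Brute force stands in for the quantum annealer on this tiny instance.
best = min(product([0, 1], repeat=n), key=lambda x: np.array(x) @ Q @ np.array(x))
print(np.array(best).reshape(n_tasks, n_slots))
```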
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition via the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
Xiao, Shijun; Li, Jiongtang; Ma, Fengshou; Fang, Lujing; Xu, Shuangbin; Chen, Wei; Wang, Zhi Yong
2015-09-03
Large yellow croaker (Larimichthys crocea) is an important commercial fish in China and East-Asia. The annual product of the species from the aqua-farming industry is about 90 thousand tons. In spite of its economic importance, genetic studies of economic traits and genomic selections of the species are hindered by the lack of genomic resources. Specifically, a whole-genome physical map of large yellow croaker is still missing. The traditional BAC-based fingerprint method is extremely time- and labour-consuming. Here we report the first genome map construction using the high-throughput whole-genome mapping technique by nanochannel arrays in BioNano Genomics Irys system. For an optimal marker density of ~10 per 100 kb, the nicking endonuclease Nt.BspQ1 was chosen for the genome map generation. 645,305 DNA molecules with a total length of ~112 Gb were labelled and detected, covering more than 160X of the large yellow croaker genome. Employing IrysView package and signature patterns in raw DNA molecules, a whole-genome map of large yellow croaker was assembled into 686 maps with a total length of 727 Mb, which was consistent with the estimated genome size. The N50 length of the whole-genome map, including 126 maps, was up to 1.7 Mb. The excellent hybrid alignment with large yellow croaker draft genome validated the consensus genome map assembly and highlighted a promising application of whole-genome mapping on draft genome sequence super-scaffolding. The genome map data of large yellow croaker are accessible on lycgenomics.jmu.edu.cn/pm. Using the state-of-the-art whole-genome mapping technique in Irys system, the first whole-genome map for large yellow croaker has been constructed and thus highly facilitates the ongoing genomic and evolutionary studies for the species. To our knowledge, this is the first public report on genome map construction by the whole-genome mapping for aquatic-organisms. Our study demonstrates a promising application of the whole-genome mapping on genome maps construction for other non-model organisms in a fast and reliable manner.
Optimization of almandine garnet analyses performed with a scanning electron microscope (Optimisation d'analyses de grenat almandin realisees au microscope electronique a balayage)
NASA Astrophysics Data System (ADS)
Larose, Miguel
The electron microprobe (EMP) is considered as the golden standard for the collection of precise and representative chemical composition of minerals in rocks, but data of similar quality should be obtainable with a scanning electron microscope (SEM). This thesis presents an analytical protocol aimed at optimizing operational parameters of an SEM paired with an EDS Si(Li) X-ray detector (JEOL JSM-840A) for the imaging, quantitative chemical analysis and compositional X-ray maps of almandine garnet found in pelitic schists from the Canadian Cordillera. Results are then compared to those obtained for the same samples on a JEOL JXA 8900 EMP. For imaging purposes, the secondary electrons and backscattered electrons signals have been used to obtain topographic and chemical contrast of the samples, respectively. The SEM allows the acquisition of images with higher resolution than the EMP when working at high magnifications. However, for millimetric size minerals requiring very low magnifications, the EMP can usually match the imaging capabilities of an SEM. When optimizing images for both signals, the optimal operational parameters to show similar contrasts are not restricted to a unique combination of values. Optimization of operational parameters for quantitative chemical analysis resulted in analytical data with a similar precision and showing good correlation to that obtained with an EMP. Optimization of operational parameters for compositional X-ray maps aimed at maximizing the collected intensity within a pixel as well as complying with the spatial resolution criterion in order to obtain a qualitative compositional map representative of the chemical variation within the grain. Even though various corrections were needed, such as the shadow effect and the background noise removal, as well as the impossibility of meeting the spatial resolution criterion because of the limited pixel density available on the SEM, the compositional X-ray maps show a good correlation with those obtained with the EMP, even for concentrations as low as 0.5%. When paired with a rigorous analytical protocol, the use of an SEM equipped with an EDS Si(Li) X-ray detector allows the collection of qualitative and quantitative results similar to those obtained with an EMP for all three of the applications considered.
Empty tracks optimization based on Z-Map model
NASA Astrophysics Data System (ADS)
Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao
2017-12-01
For parts with many features, there are many empty (non-cutting) tool tracks during machining. If these tracks are not optimized, the machining efficiency will be seriously affected. In this paper, the characteristics of the empty tracks are studied in detail. Combined with an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and the Z-Map model simulation technique is then used to analyze the order constraints between the unit segments. The empty-track optimization problem is transformed into a TSP with sequential constraints, which is then solved with a genetic algorithm. This method can optimize not only simple structural parts but also complex structural parts, effectively planning the empty tracks and greatly improving machining efficiency.
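The TSP-plus-genetic-algorithm step can be sketched as below. This simplified version orders segments by a single representative point and omits the sequential (precedence) constraints derived from the Z-Map simulation; all GA settings are illustrative assumptions:

```python
import math
import random

# Order the unit processing segments so that the total empty (rapid-traverse)
# distance between consecutive segments is minimal; each segment is reduced
# to a single entry/exit point for brevity.
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def empty_track_length(order):
    return sum(math.dist(points[a], points[b]) for a, b in zip(order, order[1:]))

def crossover(p1, p2):
    """Order crossover (OX): copy a slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(order, rate=0.1):
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

population = [random.sample(range(len(points)), len(points)) for _ in range(100)]
for _ in range(300):
    population.sort(key=empty_track_length)
    parents = population[:20]                      # simple truncation selection
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(80)]
best = min(population, key=empty_track_length)
print(empty_track_length(best))
```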
NASA Technical Reports Server (NTRS)
Offield, T. W. (Principal Investigator); Watson, K.; Hummer-Miller, S.
1981-01-01
In the Powder River Basin, Wyo., narrow geologic units having thermal inertias which contrast with their surroundings can be discriminated in optimal images. A few subtle thermal inertia anomalies coincide with areas of helium leakage believed to be associated with deep oil and gas concentrations. The most important results involved delineation of tectonic framework elements, some of which were not previously recognized. Thermal and thermal inertia images also permit mapping of geomorphic textural domains. A thermal lineament appears to reveal a basement discontinuity which involves the Homestake Mine in the Black Hills, a zone of Tertiary igneous activity and facies control in oil-producing horizons. Applications of these data to the Cabeza Prieta, Ariz., area illustrate their potential for igneous rock type discrimination. Extension to Yellowstone National Park resulted in the detection of additional structural information but surface hydrothermal features could not be distinguished with any confidence. A thermal inertia mapping algorithm, a fast and accurate image registration technique, and an efficient topographic slope and elevation correction method were developed.
Neurocomputing strategies in decomposition based structural design
NASA Technical Reports Server (NTRS)
Szewczyk, Z.; Hajela, P.
1993-01-01
The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.
Optimized efficient liver T1ρ mapping using limited spin lock times
NASA Astrophysics Data System (ADS)
Yuan, Jing; Zhao, Feng; Griffith, James F.; Chan, Queenie; Wang, Yi-Xiang J.
2012-03-01
T1ρ relaxation has recently been found to be sensitive to liver fibrosis and has potential to be used for early detection of liver fibrosis and grading. Liver T1ρ imaging and accurate mapping are challenging because of the long scan time, respiration motion and high specific absorption rate. Reduction and optimization of spin lock times (TSLs) are an efficient way to reduce scan time and radiofrequency energy deposition of T1ρ imaging, but maintain the near-optimal precision of T1ρ mapping. This work analyzes the precision in T1ρ estimation with limited, in particular two, spin lock times, and explores the feasibility of using two specific operator-selected TSLs for efficient and accurate liver T1ρ mapping. Two optimized TSLs were derived by theoretical analysis and numerical simulations first, and tested experimentally by in vivo rat liver T1ρ imaging at 3 T. The simulation showed that the TSLs of 1 and 50 ms gave optimal T1ρ estimation in a range of 10-100 ms. In the experiment, no significant statistical difference was found between the T1ρ maps generated using the optimized two-TSL combination and the maps generated using the six TSLs of [1, 10, 20, 30, 40, 50] ms according to one-way ANOVA analysis (p = 0.1364 for liver and p = 0.8708 for muscle).
Gahm, Jin Kyu; Shi, Yonggang
2018-01-01
Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer’s disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. PMID:29574399
Towards the Optimal Pixel Size of dem for Automatic Mapping of Landslide Areas
NASA Astrophysics Data System (ADS)
Pawłuszek, K.; Borkowski, A.; Tarolli, P.
2017-05-01
Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies demonstrated that choosing the finest DEM resolution is not always the best solution. Various DEM resolutions can be applicable for diverse landslide applications. Thus, this study aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely feed forward neural network (FFNN) and maximum likelihood classification (ML), was applied in this study. Additionally, this allowed us to determine the impact of the classification method used on the selection of DEM resolution. Landslide affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing a confusion matrix. The results of this study suggest that the finest DEM scale is not always the best fit; however, working at 1 m DEM resolution at the micro-topography scale can show different results. The best performance was found using 5 m DEM resolution for FFNN and 1 m DEM resolution for ML classification.
1981-12-01
Fragment from the appendix of a Texas Instruments Ada Optimizing Compiler manual: the code generator writes per-unit map files named library-file.library-unit{.subunit}.SYMAP (Symbol Map), .SMAP (Statement Map) and .TMAP (Type Map), produced by the SYMAP, SMAP and TMAP code generators respectively; section A.3.5 describes the PUNIT command, and Example A-3 gives a compiler command stream for the code generator.
Mission Adaptive UAS Platform for Earth Science Resource Assessment
NASA Technical Reports Server (NTRS)
Dunagan, S.; Fladeland, M.; Ippolito, C.; Knudson, M.
2015-01-01
NASA Ames Research Center has led a number of important Earth science remote sensing missions including several directed at the assessment of natural resources. A key asset for accessing high risk airspace has been the 180 kg class SIERRA UAS platform, providing mission durations of up to 8 hrs at altitudes up to 3 km. Recent improvements to this mission capability are embodied in the incipient SIERRA-B variant. Two resource mapping problems having unusual mission characteristics requiring a mission adaptive capability are explored here. One example involves the requirement for careful control over solar angle geometry for passive reflectance measurements. This challenges the management of resources in the coastal ocean where solar angle combines with sea state to produce surface glint that can obscure the ocean color signal. Furthermore, as for all scanning imager applications, the primary flight control priority to fly the UAS directly to the next waypoint must be balanced against the requirement to minimize roll and crab effects in the imagery. A second example involves the mapping of natural resources in the Earth's crust using precision magnetometry. In this case the vehicle flight path must be oriented to optimize magnetic flux gradients over a spatial domain having continually emerging features, while optimizing the efficiency of the spatial mapping task. These requirements were highlighted in several recent Earth Science missions including the October 2013 OCEANIA mission directed at improving the capability for hyperspectral reflectance measurements in the coastal ocean, and the Surprise Valley Mission directed at mapping sub-surface mineral composition and faults, using high-sensitivity magnetometry. This paper reports the development of specific aircraft control approaches to incorporate the unusual and demanding requirements to manage solar angle, aircraft attitude and flight path orientation, and efficient (directly geo-rectified) surface and sub-surface mapping, including the near-time optimization of these sometimes conflicting requirements.
Optimal mapping of irregular finite element domains to parallel processors
NASA Technical Reports Server (NTRS)
Flower, J.; Otto, S.; Salama, M.
1987-01-01
Mapping the solution domain of n finite elements into N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
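A minimal sketch of the simulated-annealing mapping idea: elements are assigned to processors so as to balance the load and limit communication across subdomain boundaries. The cost function, cooling schedule and toy grid below are illustrative assumptions rather than the paper's formulation:

```python
import math
import random

def simulated_annealing_map(elements, edges, n_proc, n_iter=20000,
                            t0=10.0, alpha=0.9995, mu=1.0):
    """Assign elements to processors by simulated annealing.
    cost = edges cut between processors + mu * load imbalance."""
    assign = {e: random.randrange(n_proc) for e in elements}

    def cost(a):
        cut = sum(1 for u, v in edges if a[u] != a[v])
        loads = [0] * n_proc
        for e in elements:
            loads[a[e]] += 1
        return cut + mu * (max(loads) - min(loads))

    c, t = cost(assign), t0
    for _ in range(n_iter):
        e = random.choice(elements)
        old = assign[e]
        assign[e] = random.randrange(n_proc)
        c_new = cost(assign)
        # Metropolis rule: always accept improvements, sometimes accept worse moves
        if c_new > c and random.random() >= math.exp((c - c_new) / t):
            assign[e] = old                    # reject the move
        else:
            c = c_new
        t *= alpha                             # cooling schedule
    return assign, c

# Toy 4x4 grid of elements mapped onto 2 processors.
els = [(i, j) for i in range(4) for j in range(4)]
adj = [((i, j), (i + di, j + dj)) for i, j in els for di, dj in ((1, 0), (0, 1))
       if i + di < 4 and j + dj < 4]
print(simulated_annealing_map(els, adj, n_proc=2))
```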
NASA Technical Reports Server (NTRS)
Sensmeier, Mark D.; Samareh, Jamshid A.
2005-01-01
An approach is proposed for the rapid generation of moderate-fidelity structural finite element models of air vehicle structures, to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
In Vivo Fiber-Optic Raman Mapping Of Metastases In Mouse Brains
NASA Astrophysics Data System (ADS)
Stelling, A.; Kirsch, M.; Steiner, G.; Krafft, C.; Schackert, G.; Salzer, R.
2010-08-01
Vibrational spectroscopy, in particular Raman spectroscopy, has potential applications in the field of in vivo diagnostics. Raman and FT-IR spectroscopy analyze the complete biochemical information at any given pixel within the visual field. Here we demonstrate the feasibility of performing Raman spectroscopic measurements on living mouse brains using a fiber-optic probe with a nominal spatial resolution of 60 μm. The objectives of this study were to 1) evaluate preclinical models, namely murine brain slices containing experimental tumors, 2) optimize the preparation of pristine brain tissue to obtain reference information, 3) optimize the conditions for introducing a fiber-optic probe to acquire Raman maps in vivo, and 4) transfer results obtained from human brain tumors to an animal model. Disseminated brain metastases of malignant melanomas were induced by injecting tumor cells into the carotid artery of mice. The procedure mimicked hematogenous tumor spread in one brain hemisphere while the other hemisphere remained tumor free. Three series of sections were prepared consecutively from whole mouse brains: pristine 2-mm-thick sections for Raman mapping; dried thin sections for FT-IR imaging; and hematoxylin and eosin-stained thin sections for histopathological assessment. Raman maps were collected serially using a spectrometer coupled to a fiber-optic probe. FT-IR images were recorded using a spectrometer with a multi-channel detector. The FT-IR images and the Raman maps were evaluated by multivariate data analysis. The results obtained from the thin section studies were employed to guide measurements of murine brains in vivo. Raman maps requiring acquisition times of over an hour could be collected from the living animals. No damage to the tissue was observed.
SIFT optimization and automation for matching images from multiple temporal sources
NASA Astrophysics Data System (ADS)
Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio
2017-05-01
Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time, which constitutes a first step toward the automation of processes in mapping applications such as geometric correction, orthophoto creation, and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, and the optimized values are corroborated using separate validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
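A minimal sketch of the kind of parameter exploration described, using the OpenCV SIFT implementation; the parameter grid, the ratio-test threshold, and the image file names are illustrative assumptions, not the values optimized in the paper:

```python
import itertools
import cv2

def count_good_matches(img1, img2, n_octave_layers, contrast_thr, edge_thr, sigma):
    """Detect SIFT keypoints with the given parameters and count ratio-test matches."""
    sift = cv2.SIFT_create(nOctaveLayers=n_octave_layers,
                           contrastThreshold=contrast_thr,
                           edgeThreshold=edge_thr,
                           sigma=sigma)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    return sum(1 for m, n in matches if m.distance < 0.75 * n.distance)

if __name__ == "__main__":
    # Hypothetical multitemporal image pair (file names are placeholders).
    img_a = cv2.imread("epoch_2010.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("epoch_2015.png", cv2.IMREAD_GRAYSCALE)
    grid = itertools.product([3, 4, 5],          # nOctaveLayers
                             [0.02, 0.04],       # contrastThreshold
                             [10, 20],           # edgeThreshold
                             [1.2, 1.6, 2.0])    # sigma
    best = max(grid, key=lambda p: count_good_matches(img_a, img_b, *p))
    print("best parameter combination:", best)
```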
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-08-12
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images captured at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m provided weed cover and herbicide application maps as accurate as those derived from UAV-images acquired in real flights.
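A minimal sketch of the resampling step itself, assuming a 30-m-altitude orthomosaic loaded as an array and a simple ratio between flight altitudes to define the target pixel size; the scaling rule, interpolation choice, and file name are illustrative assumptions:

```python
import cv2

def resample_to_coarser(image, altitude_src_m, altitude_dst_m):
    """Simulate imagery acquired at a higher altitude by coarsening the pixel size.

    Assumes ground sample distance scales roughly linearly with flight altitude,
    so an image captured at 30 m is downscaled by 0.5 to mimic 60 m, etc."""
    scale = altitude_src_m / altitude_dst_m
    height, width = image.shape[:2]
    new_size = (max(1, int(width * scale)), max(1, int(height * scale)))
    # Area interpolation averages the contributing pixels, the usual choice
    # when shrinking remote sensing imagery.
    return cv2.resize(image, new_size, interpolation=cv2.INTER_AREA)

if __name__ == "__main__":
    uav_30m = cv2.imread("uav_mosaic_30m.tif", cv2.IMREAD_UNCHANGED)  # placeholder path
    rs_60m = resample_to_coarser(uav_30m, 30, 60)
    rs_100m = resample_to_coarser(uav_30m, 30, 100)
    print(uav_30m.shape, rs_60m.shape, rs_100m.shape)
```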
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
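The implementations described are ChemAxon's; as a rough, openly available stand-in, the sketch below runs a timeout-bounded maximum common substructure search with RDKit's FindMCS. The molecules and the timeout are illustrative assumptions, and RDKit's heuristics are not the ones evaluated in the paper:

```python
from rdkit import Chem
from rdkit.Chem import rdFMCS

# Three hypothetical query molecules sharing a phenethyl-like core.
smiles = ["c1ccccc1CCN", "c1ccccc1CCO", "c1ccc(Cl)cc1CCN"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# Bound the running time, as practical applications usually must.
result = rdFMCS.FindMCS(mols, timeout=5, ringMatchesRingOnly=True)
print("MCS SMARTS:", result.smartsString)
print("atoms/bonds:", result.numAtoms, result.numBonds, "timed out:", result.canceled)

# The SMARTS pattern also yields an atom-to-atom mapping on each molecule.
core = Chem.MolFromSmarts(result.smartsString)
for mol, smi in zip(mols, smiles):
    print(smi, "->", mol.GetSubstructMatch(core))
```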
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces some challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping and core allocation. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
NASA Astrophysics Data System (ADS)
She, Yuchen; Li, Shuang
2018-01-01
A planning algorithm that calculates a satellite's optimal slew trajectory under a given keep-out constraint is proposed. An energy-optimal formulation is proposed for the Space-based multiband astronomical Variable Objects Monitor Mission Analysis and Planning (MAP) system. The innovative point of the proposed planning algorithm is that the satellite structure and control limitations are not treated as optimization constraints but are instead formulated into the cost function. This modification relieves the burden on the optimizer and increases the optimization efficiency, which is the major challenge in designing the MAP system. Mathematical analysis is given to prove that there is a proportional mapping between the formulation and the satellite controller output. Simulations with different scenarios are given to demonstrate the efficiency of the developed algorithm.
Generating Multi-Destination Maps.
Zhang, Junsong; Fan, Jiepeng; Luo, Zhenshan
2017-08-01
Multi-destination maps are a kind of navigation map intended to guide visitors to multiple destinations within a region, which can be of great help to urban visitors. However, they are not yet available in current online map services. To address this issue, we introduce a novel layout model designed especially for generating multi-destination maps, which considers both the global and local layout of a multi-destination map. We model the layout problem as a graph drawing that satisfies a set of hard and soft constraints. In the global layout phase, we balance the scale factor between ROIs. In the local layout phase, we make all edges clearly visible and optimize the map layout to preserve the relative length and angle of roads. We also propose a perturbation-based optimization method to find an optimal layout in the complex solution space. The multi-destination maps generated by our system are feasible for modern mobile devices, and our results can show an overview and a detail view of the whole map at the same time. In addition, we perform a user study to evaluate the effectiveness of our method, and the results show that the multi-destination maps achieve our goals well.
A systematic conservation planning approach to fire risk management in Natura 2000 sites.
Foresta, Massimiliano; Carranza, Maria Laura; Garfì, Vittorio; Di Febbraro, Mirko; Marchetti, Marco; Loy, Anna
2016-10-01
A primary challenge in conservation biology is to preserve the most representative biodiversity while simultaneously optimizing the efforts associated with conservation. In Europe, the implementation of the Natura 2000 network requires protocols to recognize and map threats to biodiversity and to identify specific mitigation actions. We propose a systematic conservation planning approach to optimize management actions against specific threats based on two fundamental parameters: biodiversity values and threat pressure. We used the conservation planning software Marxan to optimize a fire management plan in a Natura 2000 coastal network in southern Italy. We address four primary questions: i) Which areas are at high fire risk? ii) Which areas are the most valuable for threatened biodiversity? iii) Which areas should receive priority risk-mitigation actions for the optimal effect? iv) Which fire-prevention actions are feasible in the management areas? The biodiversity values for the Natura 2000 spatial units were derived from the distribution maps of 18 habitats and 89 vertebrate species of concern in Europe (Habitat Directive 92/43/EEC). The threat pressure map, defined as fire probability, was obtained from digital layers of fire risk and of fire frequency. Marxan settings were defined as follows: a) planning units of 40 × 40 m, b) conservation features defined as all habitats and vertebrate species of European concern occurring in the study area, c) conservation targets defined according to the fire sensitivity and extinction risk of the conservation features, and d) costs determined as the complement of the fire probabilities. We identified 23 management areas in which to concentrate efforts for the optimal reduction of fire-induced effects. Because traditional fire prevention is not feasible for most of the policy habitats included in the management areas, alternative prevention practices that allow the conservation of the vegetation structure were identified. The proposed approach has potential applications for multiple landscapes, threats and spatial scales and could be extended to other valuable natural areas, including protected areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gahm, Jin Kyu; Shi, Yonggang
2018-05-01
Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer's disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from substantia nigra. Copyright © 2018 Elsevier B.V. All rights reserved.
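A short sketch of the stated mathematical foundation in standard notation (the symbols below are generic, not taken from the paper): for a diffeomorphism between two surfaces, equipping the source surface with the pullback metric makes the map an isometry, so the Laplace-Beltrami operators, and hence the LB spectra and embeddings, coincide:

```latex
% Pullback metric induced by a diffeomorphism \varphi : M -> N
(\varphi^{*} g_{\mathcal{N}})_{p}(u, v)
  = g_{\mathcal{N},\,\varphi(p)}\bigl(d\varphi_{p}\,u,\; d\varphi_{p}\,v\bigr),
  \qquad u, v \in T_{p}\mathcal{M}.

% With g_M = \varphi^* g_N, \varphi is an isometry and the LB operators
% commute with composition, so both surfaces share the same spectrum:
\Delta_{\varphi^{*} g_{\mathcal{N}}} (f \circ \varphi)
  = \bigl(\Delta_{g_{\mathcal{N}}} f\bigr) \circ \varphi .
```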
Ormes, James D; Zhang, Dan; Chen, Alex M; Hou, Shirley; Krueger, Davida; Nelson, Todd; Templeton, Allen
2013-02-01
There has been a growing interest in amorphous solid dispersions for bioavailability enhancement in drug discovery. Spray drying, as shown in this study, is well suited to produce prototype amorphous dispersions in the Candidate Selection stage, where drug supply is limited. This investigation mapped the processing window of a micro-spray dryer to achieve desired particle characteristics and optimize throughput/yield. Effects of processing variables on the properties of hypromellose acetate succinate were evaluated by a fractional factorial design of experiments. Parameters studied include solid loading, atomization, nozzle size, and spray rate. Response variables include particle size, morphology and yield. Unlike most other commercial small-scale spray dryers, the ProCepT was capable of producing particles with a relatively wide mean particle size range, ca. 2-35 µm, allowing material properties to be tailored to support various applications. In addition, an optimized throughput of 35 g/hour with a yield of 75-95% was achieved, which is sufficient to support studies from lead identification/lead optimization to early safety studies. A regression model was constructed to quantify the relationship between processing parameters and the response variables. The response surface curves provide a useful tool for designing processing conditions, leading to a reduction in development time and drug usage to support drug discovery.
Fundamental procedures of geographic information analysis
NASA Technical Reports Server (NTRS)
Berry, J. K.; Tomlin, C. D.
1981-01-01
Analytical procedures common to most computer-oriented geographic information systems are composed of fundamental map processing operations. A conceptual framework for such procedures is developed and basic operations common to a broad range of applications are described. Among the major classes of primitive operations identified are those associated with: reclassifying map categories as a function of the initial classification, the shape, the position, or the size of the spatial configuration associated with each category; overlaying maps on a point-by-point, a category-wide, or a map-wide basis; measuring distance; establishing visual or optimal path connectivity; and characterizing cartographic neighborhoods based on the thematic or spatial attributes of the data values within each neighborhood. By organizing such operations in a coherent manner, the basis for a generalized cartographic modeling structure can be developed which accommodates a variety of needs in a common, flexible and intuitive manner. The use of each is limited only by the general thematic and spatial nature of the data to which it is applied.
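A minimal sketch of a few of these primitive map operations on raster layers, using NumPy and SciPy as stand-ins for a GIS; the category codes and the simple "suitability" overlay rule are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
landcover = rng.integers(1, 4, size=(50, 50))   # hypothetical categories 1..3
elevation = rng.random((50, 50)) * 100.0        # hypothetical elevation layer

# Reclassify map categories as a function of the initial classification.
is_forest = np.where(landcover == 2, 1, 0)

# Overlay two maps on a point-by-point basis (simple Boolean suitability rule).
suitable = (is_forest == 1) & (elevation < 60.0)

# Measure distance: Euclidean distance from every cell to the nearest forest cell.
dist_to_forest = ndimage.distance_transform_edt(is_forest == 0)

# Characterize cartographic neighborhoods: mean elevation in a 3x3 window.
neighborhood_mean = ndimage.uniform_filter(elevation, size=3)

print(suitable.sum(), dist_to_forest.max(), neighborhood_mean.shape)
```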
Optimal mapping of neural-network learning on message-passing multicomputers
NASA Technical Reports Server (NTRS)
Chu, Lon-Chan; Wah, Benjamin W.
1992-01-01
A minimization of learning-algorithm completion time is sought in the present optimal-mapping study of the learning process in multilayer feed-forward artificial neural networks (ANNs) for message-passing multicomputers. A novel approximation algorithm for mappings of this kind is derived from observations of the dominance of a parallel ANN algorithm over its communication time. Attention is given to both static and dynamic mapping schemes for systems with static and dynamic background workloads, as well as to experimental results obtained for simulated mappings on multicomputers with dynamic background workloads.
Nonlinear equation of the modes in circular slab waveguides and its application.
Zhu, Jianxin; Zheng, Jia
2013-11-20
In this paper, circularly curved inhomogeneous waveguides are transformed into straight inhomogeneous waveguides first by a conformal mapping. Then, the differential transfer matrix method is introduced and adopted to deduce the exact dispersion relation for modes. This relation itself is complex and difficult to solve, but it can be approximated by a simpler nonlinear equation in practical applications, which is close to the exact relation and quite easy to analyze. Afterward, optimized asymptotic solutions are obtained and act as initial guesses for the following Newton's iteration. Finally, very accurate solutions are achieved in the numerical experiment.
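A minimal sketch of the final step, assuming the approximate dispersion relation has been reduced to a scalar equation f(β) = 0 and that an asymptotic estimate is available as the starting point; the example equation and starting value are placeholders, not the waveguide relation derived in the paper:

```python
import math

def newton(f, x0, tol=1e-12, max_iter=50):
    """Newton's iteration with a central-difference derivative."""
    x = x0
    for _ in range(max_iter):
        h = 1e-7 * max(1.0, abs(x))
        dfdx = (f(x + h) - f(x - h)) / (2.0 * h)
        step = f(x) / dfdx
        x -= step
        if abs(step) < tol:
            break
    return x

if __name__ == "__main__":
    # Placeholder transcendental equation tan(beta) - 1/beta = 0.
    g = lambda beta: math.tan(beta) - 1.0 / beta
    asymptotic_guess = 0.8  # would come from the optimized asymptotic solution
    root = newton(g, asymptotic_guess)
    print("refined root:", root, "residual:", g(root))
```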
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halavanau, A.; Hyun, J.; Mihalcea, D.
A photocathode, immersed in a solenoidal magnetic field, can produce canonical-angular-momentum (CAM) dominated or "magnetized" electron beams. Such beams have an application in electron cooling of hadron beams and can also be uncoupled to yield asymmetric-emittance ("flat") beams. In the present paper we explore the possibilities of flat-beam generation at Fermilab's Accelerator Science and Technology (FAST) facility. We present an optimization of the beam flatness and four-dimensional transverse emittance and investigate the mapping, and its limitations, of the produced eigen-emittances to conventional emittances using a skew-quadrupole channel. Possible applications of flat beams at the FAST facility are also discussed.
Shape optimization of disc-type flywheels
NASA Technical Reports Server (NTRS)
Nizza, R. S.
1976-01-01
Techniques were developed for presenting an analytical and graphical means of selecting an optimum flywheel system design, based on system requirements, geometric constraints, and weight limitations. The techniques for creating an analytical solution are formulated from energy and structural principles. The resulting flywheel design relates stress and strain pattern distribution, operating speeds, geometry, and specific energy levels. The design techniques identify the lowest-stressed flywheel for any particular application and achieve the highest possible specific energy per unit flywheel weight. Stress and strain contour mapping and sectional profile plotting reflect the results of the structural behavior manifested under rotating conditions. This approach toward flywheel design is applicable to any metal flywheel, and permits the selection of the flywheel design to be based solely on the criteria of the system requirements that must be met, those that must be optimized, and those system parameters that may be permitted to vary.
Maessen, J G; Phelps, B; Dekker, A L A J; Dijkman, B
2004-05-01
To optimize resynchronization in biventricular pacing with epicardial leads, mapping to determine the best pacing site is a prerequisite. A port-access surgical mapping technique was developed that allowed multiple pace site selection and reproducible lead evaluation and implantation. Pressure-volume loop analysis was used for real-time guidance in targeting epicardial lead placement. Even the smallest changes in lead position revealed significantly different functional results. Optimizing the pacing site with this technique allowed functional improvement of up to 40% versus random pace site selection.
Development of optimized segmentation map in dual energy computed tomography
NASA Astrophysics Data System (ADS)
Yamakawa, Keisuke; Ueki, Hironori
2012-03-01
Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT, the difference between two attenuation coefficients acquired at two X-ray energies enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. Firstly, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). If we consider the tube voltage, it is possible to create an optimized map, but unfortunately we cannot consider the tube current. Secondly, the X-ray condition itself is not optimized. The condition can be set empirically, but this means that the optimized condition is not used correctly. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors. The distribution of the attenuation coefficient is modeled by considering the tube current. In Method-2, the optimized condition is determined to minimize segmentation errors over tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 by performing a phantom experiment under a fixed condition, and of Method-2 by performing a phantom experiment under different combinations calculated from the constant total exposure. When Method-1 was combined with Method-2, the segmentation error was reduced from 37.8% to 13.5%. These results demonstrate that our developed methods can achieve highly accurate segmentation while keeping the total exposure constant.
Continuous roll-to-roll growth of graphene films by chemical vapor deposition
NASA Astrophysics Data System (ADS)
Hesjedal, Thorsten
2011-03-01
Few-layer graphene is obtained in atmospheric chemical vapor deposition on polycrystalline copper in a roll-to-roll process. Raman and x-ray photoelectron spectroscopy were employed to confirm the few-layer nature of the graphene film, to map the inhomogeneities, and to study and optimize the growth process. This continuous growth process can be easily scaled up and enables the low-cost fabrication of graphene films for industrial applications.
Construction of Optimal-Path Maps for Homogeneous-Cost-Region Path-Planning Problems
1989-09-01
Optimal-path planning has been studied in depth by researchers in such fields as artificial intelligence, robotics, and computational geometry; most methods require homogeneous-cost regions. Only OCR fragments of the remainder of this record survive, including a citation of Kirkpatrick, S., Gelatt Jr., C. D., and Vecchi, M. P., "Optimization by Simulated Annealing", Science.
2015-08-01
...optimized space-time interpolation method. A tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces. Only fragments of the accompanying publication list survive, including: terrain evolution mapped by terrestrial laser scanning (talk, AGU Fall 2012); Hardin E., Mitas L., Mitasova H., simulation of wind-blown sand for geomorphological applications: a smoothed particle hydrodynamics approach (GSA 2012); and Russ E., Mitasova H., time series and space-time cube analyses.
Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y
2014-01-01
Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization method named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
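For context, a minimal sketch of the economic dispatch problem itself, solved here with a generic constrained optimizer rather than MVMOS; the three-unit cost coefficients, limits, and demand are illustrative assumptions, not the test systems used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic cost coefficients a + b*P + c*P^2 for three units.
a = np.array([100.0, 120.0, 90.0])
b = np.array([20.0, 18.0, 22.0])
c = np.array([0.05, 0.06, 0.04])
p_min = np.array([10.0, 10.0, 10.0])    # MW
p_max = np.array([120.0, 150.0, 100.0])
demand = 250.0                           # MW, losses neglected for simplicity

def total_cost(p):
    return float(np.sum(a + b * p + c * p ** 2))

result = minimize(
    total_cost,
    x0=np.full(3, demand / 3),                       # start from an even split
    bounds=list(zip(p_min, p_max)),                  # generator limits
    constraints=[{"type": "eq", "fun": lambda p: np.sum(p) - demand}],  # power balance
    method="SLSQP",
)
print("dispatch (MW):", np.round(result.x, 2), "cost ($/h):", round(result.fun, 2))
```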
NASA Astrophysics Data System (ADS)
Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas
2010-11-01
Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model, resulting in Raman images that demonstrate good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing the training datasets using mapping data, despite lengthy mapping times, due to the additional morphological information gained, and could facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future, but larger pixel sizes (and faster mapping) may be more feasible for clinical application.
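A minimal sketch of a PC-fed LDA classifier of the kind described, using scikit-learn on placeholder spectra; the array shapes, number of components, and cross-validation setup are illustrative assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder Raman map data: 600 spectra x 1000 wavenumbers, 3 tissue classes.
rng = np.random.default_rng(1)
spectra = rng.random((600, 1000))
labels = rng.integers(0, 3, size=600)

# Reduce the spectra to a handful of principal components, then classify with LDA.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, spectra, labels, cv=5)
print("cross-validated accuracy:", scores.mean())

# Fit on a training subset and project the remaining map spectra onto the model.
model.fit(spectra[:400], labels[:400])
predicted_map = model.predict(spectra[400:])
print(predicted_map[:10])
```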
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
The objective was to design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving the temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations while capturing the temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal orders (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using optimal removal were not statistically different from those using random removal when averaged over the entire facility. No statistical difference was observed between optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving the high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Listyorini, Tri; Muzid, Syafiul
2017-06-01
The promotion team of Muria Kudus University (UMK) conducts annual promotional visits to several senior high schools in Indonesia. The visits cover a number of schools in Kudus, Jepara, Demak, Rembang and Purwodadi. To simplify the visits, each visit round is limited to 15 (fifteen) schools. However, the team frequently faces obstacles during the visits, particularly in determining the route that they should take toward the targeted school. Long distances or difficult routes to the targeted schools lead to longer travel times and inefficient fuel costs. To solve these problems, an application was developed using a heuristic genetic algorithm with dynamic population sizing, known as Population Resizing on Fitness Improvement Genetic Algorithm (PRoFIGA). This Android-based application was developed to make the visits easier and to determine a shorter route for the team, so that the visiting period will be effective and efficient. The result of this research was an Android-based application that determines the shortest route by combining the heuristic method with the Google Maps Application Programming Interface (API), which displays the route options for the team.
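A minimal sketch of a genetic algorithm for ordering the schools on one visit round (a travelling-salesman-style tour over a distance matrix); the fixed population size, operators, and random distance matrix are illustrative simplifications and do not implement the PRoFIGA population-resizing rule described in the paper:

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ordered_crossover(p1, p2):
    """Order crossover (OX): copy a slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_route(dist, pop_size=60, generations=300, mutation_rate=0.2):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        next_pop = pop[:10]  # elitism: keep the best tours
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:30], 2)  # parents drawn from the better half
            child = ordered_crossover(p1, p2)
            if random.random() < mutation_rate:   # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)

if __name__ == "__main__":
    # Hypothetical symmetric distance matrix for 15 schools (km).
    random.seed(3)
    n_schools = 15
    dist = [[0.0] * n_schools for _ in range(n_schools)]
    for i in range(n_schools):
        for j in range(i + 1, n_schools):
            dist[i][j] = dist[j][i] = random.uniform(2, 40)
    route, km = ga_route(dist)
    print("visit order:", route, "total distance (km):", round(km, 1))
```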
NASA Astrophysics Data System (ADS)
Aydogan, D.
2007-04-01
An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies, such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template. CNN can also be considered to be a nonlinear convolution of these matrices. This template describes the strength of the nearest-neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used in the optimization of the cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was compared with classical derivative techniques by applying the cross-correlation method (CC) to the same anomaly map, as this latter approach can detect some features that are difficult to identify on the Bouguer anomaly maps. This approach was then applied to the Bouguer anomaly map of Biga and its surrounding area in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. This approach provides quantitative solutions based on just a few assumptions, which makes the method more powerful than the classical methods.
Dou, Jie; Tien Bui, Dieu; Yunus, Ali P; Jia, Kun; Song, Xuan; Revhaug, Inge; Xia, Huan; Zhu, Zhongfan
2015-01-01
This paper assesses the potential of certainty factor models (CF) for extracting the most suitable causative factors for landslide susceptibility mapping in Sado Island, Niigata Prefecture, Japan. To test the applicability of CF, a landslide inventory map provided by the National Research Institute for Earth Science and Disaster Prevention (NIED) was split into two subsets: (i) 70% of the landslides in the inventory were used for building the CF-based model; (ii) 30% of the landslides were used for validation. A spatial database with fifteen landslide causative factors was then constructed by processing ALOS satellite images, aerial photos, and topographical and geological maps. The CF model was then applied to select the best subset from the fifteen factors. Using all fifteen factors and the best-subset factors, landslide susceptibility maps were produced using statistical index (SI) and logistic regression (LR) models. The susceptibility maps were validated and compared using landslide locations in the validation data. The prediction performance of the two susceptibility maps was estimated using Receiver Operating Characteristics (ROC). The result shows that the area under the ROC curve (AUC) for the LR model (AUC = 0.817) is slightly higher than that obtained from the SI model (AUC = 0.801). Further, it is noted that the SI and LR models using the best subset outperform the models using the fifteen original factors. Therefore, we conclude that the optimized factor model using CF is more accurate in predicting landslide susceptibility and produces a more homogeneous classification map. Our findings acknowledge that in mountainous regions suffering from data scarcity, it is possible to select key factors related to landslide occurrence based on CF models in a GIS platform. Hence, the development of a scenario for future planning of risk mitigation is achieved in an efficient manner.
Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.
Song, Ci; Dai, Yifan; Peng, Xiaoqiang
2010-07-01
Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the inputs being a surface error map and an influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two steps in a single optimization through a constrained nonlinear optimization model, which treats both the two-norm of the surface residual error and the dwell-time gradient as the objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the velocity requirement and the limitations on accelerations or decelerations. Indeed, the model and algorithm can also be applied to other computer-controlled subaperture methods.
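A minimal 1-D sketch of a combined objective of this kind, assuming a discretized influence function applied by convolution and an illustrative weight on the dwell-time gradient; the data, weight, and solver are placeholders rather than the model in the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 80
error_map = np.abs(rng.normal(0.0, 50.0, n))      # surface error profile (nm), placeholder
influence = np.exp(-np.linspace(-2, 2, 9) ** 2)   # Gaussian-like tool influence function
lam = 5.0                                          # weight on dwell-time smoothness

def objective(t):
    removal = np.convolve(t, influence, mode="same")   # material removed by dwelling
    residual = error_map - removal
    smoothness = np.diff(t)                             # dwell-time gradient term
    return np.sum(residual ** 2) + lam * np.sum(smoothness ** 2)

t0 = np.full(n, error_map.mean() / influence.sum())    # uniform starting dwell times
res = minimize(objective, t0, bounds=[(0.0, None)] * n, method="L-BFGS-B")
print("objective:", round(res.fun, 1), "max dwell:", round(res.x.max(), 2))
```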
Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System
NASA Astrophysics Data System (ADS)
Nouiraa, H.; Deschaud, J. E.; Goulettea, F.
2016-06-01
LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information, which can be used in various applications: mapping of the environment, localization of objects, and detection of changes. Also, with recent developments, multi-beam LIDAR sensors have appeared and are able to provide a high amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform requires an extrinsic calibration, so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model provided by the manufacturer, but the model can be suboptimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they require a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. Also, a corrected model for the Velodyne sensor is proposed. An energy function that penalizes points far from local planar surfaces is used to optimize the proposed parameters of the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.
Arooj, Mahreen; Sakkiah, Sugunadevi; Cao, Guang ping; Lee, Keun Woo
2013-01-01
Owing to the inherent redundancy and robustness of many biological networks and pathways, multitarget inhibitors present a new prospect in the pharmaceutical industry for the treatment of complex diseases. Nevertheless, designing multitarget inhibitors is at the same time a great challenge for medicinal chemists. We have developed a novel computational approach by integrating the affinity predictions from structure-based virtual screening with a dual ligand-based pharmacophore to discover potential dual inhibitors of human thymidylate synthase (hTS) and human dihydrofolate reductase (hDHFR). These are key enzymes in the folate metabolic pathway, which is necessary for the biosynthesis of RNA, DNA, and protein. Their inhibition has found clinical utility in antitumor, antimicrobial, and antiprotozoal agents. A druglike database was utilized to perform dual-target docking studies. Hits identified through the docking experiments were mapped onto a dual pharmacophore which was developed from experimentally known dual inhibitors of hTS and hDHFR. The pharmacophore mapping procedure helped us eliminate compounds that do not possess the basic chemical features necessary for dual inhibition. Finally, three structurally diverse hit compounds that showed key interactions at both active sites, mapped well onto the dual pharmacophore, and exhibited the lowest binding energies were regarded as possible dual inhibitors of hTS and hDHFR. Furthermore, optimization studies were performed for the final dual hit compound, and eight optimized dual hits demonstrating excellent binding features at the target systems were also regarded as possible dual inhibitors of hTS and hDHFR. In general, the strategy used in the current study could be a promising computational approach and may be generally applicable to other dual-target drug designs.
Pei, Yan
2015-01-01
We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, that is, the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework. There is a paired-comparison-based mechanism behind the CE search scheme in nature. A simulation experiment is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.
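A minimal sketch of two of the chaotic systems named above and of how a chaotic sequence might drive the perturbation step of an evolution-style search; the fitness function, scaling, and update rule are illustrative assumptions, not the CE algorithm defined in the paper:

```python
import numpy as np

def logistic_map(x, r=4.0):
    return r * x * (1.0 - x)

def tent_map(x, mu=2.0):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def chaotic_search(fitness, dim=5, iterations=2000, step=0.5):
    """Perturb the best-so-far solution with logistic-map-driven offsets."""
    rng = np.random.default_rng(7)
    chaos = rng.random(dim) * 0.9 + 0.05   # avoid the map's fixed points 0 and 1
    best = rng.uniform(-5, 5, dim)
    best_fit = fitness(best)
    for _ in range(iterations):
        chaos = np.array([logistic_map(c) for c in chaos])
        candidate = best + step * (chaos - 0.5)   # map [0,1] chaos to a signed offset
        cand_fit = fitness(candidate)
        if cand_fit < best_fit:
            best, best_fit = candidate, cand_fit
    return best, best_fit

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))   # placeholder benchmark function
    solution, value = chaotic_search(sphere)
    print("best value:", round(value, 6))
```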
NASA Technical Reports Server (NTRS)
Rutishauser, David
2006-01-01
The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.
Space mapping method for the design of passive shields
NASA Astrophysics Data System (ADS)
Sergeant, Peter; Dupré, Luc; Melkebeek, Jan
2006-04-01
The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield fast and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.
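A minimal sketch of the input space mapping idea, assuming a cheap coarse model and an expensive fine model; here both are simple placeholder response functions rather than the coupled-coil and finite element models used in the paper, and the target response and sweep are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

freq = np.linspace(0.5, 1.5, 11)            # placeholder "frequency" sweep

def fine_model(x):
    """Stand-in for the expensive finite element evaluation (shifted response)."""
    return np.exp(-(x - 0.3) * freq)

def coarse_model(x):
    """Stand-in for the fast analytic coupled-coil evaluation."""
    return np.exp(-x * freq)

target = np.exp(-2.0 * freq)                 # desired (specified) response

def misfit(model, x, reference):
    return float(np.sum((model(x[0]) - reference) ** 2))

# Step 1: optimize the cheap coarse model against the specification.
x_coarse_opt = minimize(lambda x: misfit(coarse_model, x, target), x0=[1.0]).x[0]

# Steps 2+: basic aggressive space mapping with an identity mapping update.
x_fine = x_coarse_opt
for _ in range(5):
    fine_resp = fine_model(x_fine)
    # Parameter extraction: coarse-space point reproducing the current fine response.
    x_extracted = minimize(lambda z: misfit(coarse_model, z, fine_resp), x0=[x_fine]).x[0]
    x_fine = x_fine - (x_extracted - x_coarse_opt)

print("coarse optimum:", round(x_coarse_opt, 3), "space-mapped fine design:", round(x_fine, 3))
```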
Travagliati, Marco; Girardo, Salvatore; Pisignano, Dario; Beltram, Fabio; Cecchini, Marco
2013-09-03
Spatiotemporal image correlation spectroscopy (STICS) is a simple and powerful technique, well established as a tool to probe protein dynamics in cells. Recently, its potential as a tool to map velocity fields in lab-on-a-chip systems was discussed. However, the lack of studies on its performance has prevented its use for microfluidics applications. Here, we systematically and quantitatively explore STICS microvelocimetry in microfluidic devices. We exploit a simple experimental setup, based on a standard bright-field inverted microscope (no fluorescence required) and a high-fps camera, and apply STICS to map liquid flow in polydimethylsiloxane (PDMS) microchannels. Our data demonstrates optimal 2D velocimetry up to 10 mm/s flow and spatial resolution down to 5 μm.
New vistas in refractive laser beam shaping with an analytic design approach
NASA Astrophysics Data System (ADS)
Duerr, Fabian; Thienpont, Hugo
2014-05-01
Many commercial, medical and scientific applications of the laser have been developed since its invention. Some of these applications require a specific beam irradiance distribution to ensure optimal performance. Often, it is possible to apply geometrical methods to design laser beam shapers. This common design approach is based on the ray mapping between the input plane and the output beam. Geometric ray mapping designs with two plano-aspheric lenses have been thoroughly studied in the past. Even though analytic expressions for various ray mapping functions do exist, the surface profiles of the lenses are still calculated numerically. In this work, we present an alternative novel design approach that allows direct calculation of the rotationally symmetric lens profiles described by analytic functions. Starting from the example of a basic beam expander, a set of functional differential equations is derived from Fermat's principle. This formalism allows calculating the exact lens profiles described by Taylor series coefficients up to very high orders. To demonstrate the versatility of this new approach, two further cases are solved: a Gaussian to flat-top irradiance beam shaping system, and a beam shaping system that generates a more complex dark-hollow Gaussian (donut-like) irradiance profile with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of all calculated solutions and indicate the potential of this design approach for refractive beam shaping applications.
An automated approach for tone mapping operator parameter adjustment in security applications
NASA Astrophysics Data System (ADS)
Krasula, Lukáš; Narwaria, Manish; Le Callet, Patrick
2014-05-01
High Dynamic Range (HDR) imaging has been gaining popularity in recent years. Different from the traditional low dynamic range (LDR), HDR content tends to be visually more appealing and realistic as it can represent the dynamic range of the visual stimuli present in the real world. As a result, more scene details can be faithfully reproduced and the visual quality tends to improve. HDR can also be directly exploited for new applications such as video surveillance and other security tasks. Since more scene details are available in HDR, it can help in identifying and tracking visual information which otherwise might be difficult with typical LDR content due to factors such as lack or excess of illumination, extreme contrast in the scene, etc. On the other hand, with HDR, there might be issues related to increased privacy intrusion. To display HDR content on a regular screen, tone-mapping operators (TMO) are used. In this paper, we present a universal method for tuning TMO parameters in order to maintain as many details as possible, which is desirable in security applications. The method's performance is verified on several TMOs by comparing the outcomes from tone-mapping with default and optimized parameters. The results suggest that the proposed approach preserves more information, which could be of advantage for security surveillance but, on the other hand, makes us consider a possible increase in privacy intrusion.
NASA Astrophysics Data System (ADS)
Hostache, Renaud; Chini, Marco; Matgen, Patrick; Giustarini, Laura
2013-04-01
There is a clear need for developing innovative processing chains based on earth observation (EO) data to generate products supporting emergency response and flood management at a global scale. Here an automatic flood mapping application is introduced. The latter is currently hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to deliver flooded areas using both recent and historical acquisitions of SAR data in an operational framework. It is worth mentioning that the method can be applied to both medium and high resolution SAR images. The flood mapping application consists of two main blocks: 1) A set of query tools for selecting the "crisis image" and the optimal corresponding pre-flood "reference image" from the G-POD archive. 2) An algorithm for extracting flooded areas using the previously selected "crisis image" and "reference image". The proposed method is a hybrid methodology, which combines histogram thresholding, region growing and change detection as an approach enabling the automatic, objective and reliable flood extent extraction from SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. Change detection with respect to a pre-flood reference image helps reduce over-detection of inundated areas. The algorithms are computationally efficient and operate with minimum data requirements, considering as input data a flood image and a reference image. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate pre-flood reference image. Potential users will also be able to apply the implemented flood delineation algorithm. Case studies of several recent high magnitude flooding events (e.g. July 2007 Severn River flood, UK and March 2010 Red River flood, US) observed by high-resolution SAR sensors as well as airborne photography highlight advantages and limitations of the online application. A mid-term target is the exploitation of ESA SENTINEL 1 SAR data streams. In the long term, a potential extension of the application is foreseen for systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis. On-going research activities investigate the usefulness of the method for mapping flood hazard at global scale using databases of historic SAR remote sensing-derived flood inundation maps.
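A minimal sketch of the hybrid idea (thresholding plus change detection) on synthetic backscatter images follows. The thresholds are illustrative constants rather than values calibrated from a fitted open-water distribution, and the region-growing step is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (200, 200)
reference = rng.normal(-8.0, 1.5, shape)          # dry-land backscatter (dB)
crisis = reference.copy()
crisis[60:140, 40:160] = rng.normal(-17.0, 1.0, (80, 120))   # flooded block

water_threshold_db = -14.0     # open-water backscatter threshold (assumed)
min_decrease_db = 3.0          # required drop vs. the reference image (assumed)

candidate = crisis < water_threshold_db           # histogram-threshold step
changed = (reference - crisis) > min_decrease_db  # change-detection step
flood_mask = candidate & changed                  # reduces over-detection

print("flooded pixels:", int(flood_mask.sum()), "of", flood_mask.size)
```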
NASA Astrophysics Data System (ADS)
Mirkamali, M. S.; Keshavarz FK, N.; Bakhtiari, M. R.
2013-02-01
Faults, as main pathways for fluids, play a critical role in creating regions of high porosity and permeability, in cutting cap rock and in the migration of hydrocarbons into the reservoir. Therefore, accurate identification of fault zones is very important in maximizing production from petroleum traps. Image processing and modern visualization techniques are provided for better mapping of objects of interest. In this study, the application of fault mapping in the identification of fault zones within the Mishan and Aghajari formations above the Guri base unconformity surface in the eastern part of the Persian Gulf is investigated. Seismic single- and multi-trace attribute analyses are employed separately to determine faults in a vertical section, but different kinds of geological objects cannot be identified using individual attributes only. A mapping model is utilized to improve the identification of the faults, giving more accurate results. This method combines all individual relevant attributes using a neural network system to create combined attributes, which gives an optimal view of the object of interest. First, a set of relevant attributes was calculated separately on the vertical section. Then, at interpreted positions, example training locations were manually selected in each fault and non-fault class by an interpreter. A neural network was trained on combinations of the attributes extracted at the example training locations to generate an optimized fault cube. Finally, the trained network was applied to the entire data set to estimate the fault and non-fault probability cubes. The fault probability cube was obtained with higher mapping accuracy and greater contrast, and with fewer disturbances in comparison with individual attributes. The computed results of this study can support better understanding of the data, providing fault zone mapping with reliable results.
Kozhuharova, Ana; Sharma, Harshita; Ohyama, Takako; Fasolo, Francesca; Yamazaki, Toshio; Cotella, Diego; Santoro, Claudio; Zucchelli, Silvia; Gustincich, Stefano; Carninci, Piero
2018-01-01
SINEUPs are antisense long noncoding RNAs, in which an embedded SINE B2 element UP-regulates translation of partially overlapping target sense mRNAs. SINEUPs contain two functional domains. First, the binding domain (BD) is located in the region antisense to the target, providing specific targeting to the overlapping mRNA. Second, the inverted SINE B2 represents the effector domain (ED) and enhances translation. To adapt SINEUP technology to a broader number of targets, we took advantage of a high-throughput, semi-automated imaging system to optimize synthetic SINEUP BD and ED design in HEK293T cell lines. Using SINEUP-GFP as a model SINEUP, we extensively screened variants of the BD to map features needed for optimal design. We found that most active SINEUPs overlap an AUG-Kozak sequence. Moreover, we report our screening of the inverted SINE B2 sequence to identify active sub-domains and map the length of the minimal active ED. Our synthetic SINEUP-GFP screening of both BDs and EDs constitutes a broad test with flexible applications to any target gene of interest. PMID:29414979
Multigrid optimal mass transport for image registration and morphing
NASA Astrophysics Data System (ADS)
Rehman, Tauseef ur; Tannenbaum, Allen
2007-02-01
In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. This is a parameter free method which utilizes all of the grayscale data in an image pair in a symmetric fashion. No landmarks need to be specified for correspondence. In our work, we demonstrate significant improvement in computation time when our algorithm is applied as compared to the originally proposed method by Haker et al [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass preserving map regarded as 2D vector field. This involves inverting the Laplacian in each iteration which is now computed using full multigrid technique resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolutional framework. The algorithm was applied to 2D short axis cardiac MRI images and brain MRI images for testing and comparison.
A novel neural network for variational inequalities with linear and nonlinear constraints.
Gao, Xing-Bao; Liao, Li-Zhi; Qi, Liqun
2005-11-01
Variational inequality is a uniform approach for many important optimization and equilibrium problems. Based on the sufficient and necessary conditions of the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. Meanwhile, the proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to have a finite-time convergence under some mild condition by using a new energy function. Compared with the existing neural networks, the new model can be applied to solve some nonmonotone problems, has no adjustable parameter, and has lower complexity. Thus, the structure of the proposed network is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.
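The sketch below integrates a generic projection-type recurrent-network dynamics for a box-constrained variational inequality, dx/dt = lambda(P_Omega(x - alpha F(x)) - x), to make the idea of solving a VI by a neural dynamical system concrete. It is not the specific model proposed in the paper; the example mapping is a monotone affine map chosen so that the dynamics converge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
M = A @ A.T + n * np.eye(n)        # positive definite -> monotone mapping
q = rng.normal(size=n)

def F(x):
    return M @ x + q

lo, hi = np.zeros(n), np.ones(n)   # Omega = [0, 1]^n
proj = lambda z: np.clip(z, lo, hi)

x = np.full(n, 0.5)
alpha, lam, dt = 0.05, 1.0, 0.05
for _ in range(4000):              # forward-Euler integration of the dynamics
    x = x + dt * lam * (proj(x - alpha * F(x)) - x)

# At equilibrium x = P_Omega(x - alpha F(x)), the fixed-point condition of the VI.
residual = np.linalg.norm(x - proj(x - alpha * F(x)))
print("equilibrium x:", np.round(x, 4), " residual:", residual)
```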
A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences
NASA Astrophysics Data System (ADS)
Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert
2011-09-01
Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
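A minimal sketch of phase (1), the backward Dynamic Programming pass over a short chain with discretized states and controls, followed by the runtime lookup of phase (2). The transition and cost models are invented stand-ins for the learned process transformations.

```python
import numpy as np

states = np.linspace(0.0, 1.0, 21)          # discretized product state
controls = np.linspace(-0.2, 0.2, 9)        # admissible control values
target = 0.8                                # quality requirement on the final state
n_processes = 3

def step(x, u, k):
    """Toy transformation of process k: drift toward 0.5 plus the control action."""
    return np.clip(x + u + 0.1 * (0.5 - x), 0.0, 1.0)

# value[k][i]: cost-to-go entering process k in state i; policy[k][i]: best control.
value = np.zeros((n_processes + 1, len(states)))
value[n_processes] = (states - target) ** 2          # terminal quality penalty
policy = np.zeros((n_processes, len(states)))

for k in range(n_processes - 1, -1, -1):              # backward pass
    for i, x in enumerate(states):
        costs = []
        for u in controls:
            x_next = step(x, u, k)
            j = np.argmin(np.abs(states - x_next))    # nearest grid state
            costs.append(0.1 * u ** 2 + value[k + 1][j])
        value[k][i] = np.min(costs)
        policy[k][i] = controls[np.argmin(costs)]

# Phase (2): at runtime, look up the control for the detected initial state.
x0 = 0.2
i0 = np.argmin(np.abs(states - x0))
print("optimal first control for x0 = 0.2:", policy[0][i0])
```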
NASA Technical Reports Server (NTRS)
Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik
1991-01-01
A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.
Response Surface Methods for Spatially-Resolved Optical Measurement Techniques
NASA Technical Reports Server (NTRS)
Danehy, P. M.; Dorrington, A. A.; Cutler, A. D.; DeLoach, R.
2003-01-01
Response surface methods (or methodology), RSM, have been applied to improve data quality for two vastly different spatially-resolved optical measurement techniques. In the first application, modern design of experiments (MDOE) methods, including RSM, are employed to map the temperature field in a direct-connect supersonic combustion test facility at NASA Langley Research Center. The laser-based measurement technique known as coherent anti-Stokes Raman spectroscopy (CARS) is used to measure temperature at various locations in the combustor. RSM is then used to develop temperature maps of the flow. Even though the temperature fluctuations at a single point in the flowfield have a standard deviation on the order of 300 K, RSM provides analytic fits to the data having 95% confidence interval half-width uncertainties in the fit as low as +/-30 K. Methods of optimizing future CARS experiments are explored. The second application of RSM is to quantify the shape of a 5-meter diameter, ultra-light, inflatable space antenna at NASA Langley Research Center.
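A minimal sketch of the underlying response-surface idea: fit a low-order polynomial surface to noisy point measurements by least squares, so that the fitted surface is far less uncertain than any single measurement. The synthetic temperatures below are illustrative, not CARS data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
true_T = 1800 + 300 * x - 150 * y ** 2 + 80 * x * y
T = true_T + rng.normal(0.0, 300.0, n)            # ~300 K point-to-point scatter

# Design matrix for T ~ b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y
X = np.column_stack([np.ones(n), x, y, x ** 2, y ** 2, x * y])
beta, *_ = np.linalg.lstsq(X, T, rcond=None)

# The fitted surface averages out the scatter; its uncertainty shrinks roughly
# with sqrt(n), which is how tight fits can emerge from noisy point data.
T_hat = X @ beta
print("fit coefficients:", np.round(beta, 1))
print("residual std (K):", round(float(np.std(T - T_hat)), 1))
```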
Design and Application of Hybrid Magnetic Field-Eddy Current Probe
NASA Technical Reports Server (NTRS)
Wincheski, Buzz; Wallace, Terryl; Newman, Andy; Leser, Paul; Simpson, John
2013-01-01
The incorporation of magnetic field sensors into eddy current probes can result in novel probe designs with unique performance characteristics. One such example is a recently developed electromagnetic probe consisting of a two-channel magnetoresistive sensor with an embedded single-strand eddy current inducer. Magnetic flux leakage maps of ferrous materials are generated from the DC sensor response while high-resolution eddy current imaging is simultaneously performed at frequencies up to 5 megahertz. In this work the design and optimization of this probe will be presented, along with an application toward analysis of sensory materials with embedded ferromagnetic shape-memory alloy (FSMA) particles. The sensory material is designed to produce a paramagnetic to ferromagnetic transition in the FSMA particles under strain. Mapping of the stray magnetic field and eddy current response of the sample with the hybrid probe can thereby image locations in the structure which have experienced an overstrain condition. Numerical modeling of the probe response is performed with good agreement with experimental results.
Lo, T Y; Sim, K S; Tso, C P; Nia, M E
2014-01-01
An improvement to the previously proposed adaptive Canny optimization technique for scanning electron microscope image colorization is reported. The additional feature, called the pseudo-mapping technique, temporarily maps grayscale markings to a set of pre-defined pseudo-colors as a means of instilling color information for grayscale tones in the chrominance channels. This allows the presence of grayscale markings to be identified; hence, optimized colorization of grayscale tones becomes possible. This additional feature enhances the flexibility of scanning electron microscope image colorization by providing a wider range of possible color enhancement. Furthermore, the nature of this technique also allows users to adjust the luminance intensities of selected regions of the original image within a certain extent. © 2014 Wiley Periodicals, Inc.
Constellation labeling optimization for bit-interleaved coded APSK
NASA Astrophysics Data System (ADS)
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with the robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A Binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme which yields better performance against conventional 32-APSK constellation defined in DVB-S2 standard.
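The sketch below shows a binary-switching style descent on a toy labeling problem: the bit labels of two symbols are swapped whenever the swap reduces a pairwise cost. An 8-PSK constellation and a simple proximity-weighted Hamming cost are used for brevity; the paper's 32-APSK constellation and its Euclidean-distance and mutual-information cost functions are not reproduced.

```python
import numpy as np
from itertools import combinations

M = 8
symbols = np.exp(2j * np.pi * np.arange(M) / M)      # 8-PSK points

def hamming(a, b):
    return bin(int(a) ^ int(b)).count("1")

def cost(labels):
    # Surrogate cost: Hamming distance weighted so that close symbol pairs dominate.
    c = 0.0
    for i, j in combinations(range(M), 2):
        d = abs(symbols[i] - symbols[j])
        c += hamming(labels[i], labels[j]) * np.exp(-d ** 2)
    return c

labels = list(np.random.default_rng(0).permutation(M))  # random initial labeling
current = cost(labels)
improved = True
while improved:                                          # pairwise label switching
    improved = False
    for i, j in combinations(range(M), 2):
        labels[i], labels[j] = labels[j], labels[i]
        new = cost(labels)
        if new < current - 1e-12:
            current, improved = new, True
        else:
            labels[i], labels[j] = labels[j], labels[i]  # revert the swap
print("labeling (bit pattern per symbol):", [int(b) for b in labels])
print("final cost:", round(current, 4))
```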
Infrared image enhancement using H(infinity) bounds for surveillance applications.
Qidwai, Uvais
2008-08-01
In this paper, two algorithms have been presented to enhance the infrared (IR) images. Using the autoregressive moving average model structure and H(infinity) optimal bounds, the image pixels are mapped from the IR pixel space into normal optical image space, thus enhancing the IR image for improved visual quality. Although H(infinity)-based system identification algorithms are very common now, they are not quite suitable for real-time applications owing to their complexity. However, many variants of such algorithms are possible that can overcome this constraint. Two such algorithms have been developed and implemented in this paper. Theoretical and algorithmic results show remarkable enhancement in the acquired images. This will help in enhancing the visual quality of IR images for surveillance applications.
On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.
Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang
2018-04-25
With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.
NASA Astrophysics Data System (ADS)
Baranowski, Z.; Canali, L.; Toebbicke, R.; Hrivnac, J.; Barberis, D.
2017-10-01
This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each of which consists of ∼100 bytes, all having the same probability to be searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. We also report on the use of HBase for the EventIndex, focusing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface, and the optimizations for data warehouse workloads and reports.
The approach for shortest paths in fire succor based on component GIS technology
NASA Astrophysics Data System (ADS)
Han, Jie; Zhao, Yong; Dai, K. W.
2007-06-01
Fire safety is an important issue for the national economy and for people's lives. The efficiency and accuracy of fire-department succor directly affect the safety of people's lives and property. Traditional fire systems show several disadvantages in practice. The dispatch of pumpers is guided by wireless or wired communication, so its real-time behaviour and accuracy are poor, and the information about a reported fire (position, disaster situation, map, etc.) used for alarm and command is processed manually, which slows the reaction and squanders the best opportunity to act. To overcome these disadvantages, constructing a modern fire command center based on high technology plays an important role. Such a center can realize the modernization and automation of fire command and management, help protect people's lives and property, enhance operational capability, and greatly reduce the direct and indirect losses caused by fire. With the development of science and technology, Geographic Information Systems (GIS) have grown into a new information industry spanning hardware production, software development, data collection, spatial analysis and consulting. With the popularization of computers, GIS has gained increasingly broad application thanks to its strong functionality. Network analysis is one of the most important functions of GIS, and its most elementary and pivotal issue is the calculation of shortest paths. Shortest paths are mostly applied in emergency systems such as 119 fire alarm systems. These systems mainly require that the computation time of the optimal path be 1-3 seconds and that, during travel, the next running path of the vehicle be recalculated in time; the shortest-path implementation must therefore be highly efficient. In this paper, component GIS technology is first applied to collect and record the data of the reported fire (the disaster situation, map, road status, etc.). Ant colony optimization is then used to calculate the shortest path for fire succor. The optimization results are sent to the pumpers, which can thus choose the shortest paths intelligently and reach the fire position in the least time. The programming method for shortest paths is proposed in section 3, in three parts: the elementary framework of the proposed programming method (part one), the systematic framework of the GIS component (part two) and the ant colony optimization employed (part three). In section 4, a simple application instance is presented to demonstrate the proposed programming method, again in three parts: the distributed Web application based on component GIS (part one), the optimization results without traffic constraints (part two) and the optimization results with traffic constraints (part three). The contributions of this paper can be summarized as follows. (1) It proposes an effective approach for shortest paths in fire succor based on component GIS technology, which enables real-time shortest-path decisions for fire succor. (2) It applies ant colony optimization to implement the shortest-path decision, with traffic information taken into account. The final application instance suggests that the proposed approach is feasible, correct and valid.
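A compact sketch of an ant-colony shortest-path search on a small road graph with travel-time weights follows (traffic can be reflected by increasing a weight). The graph, parameters and iteration counts are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
INF = np.inf
#             0     1     2     3     4
W = np.array([[INF,    4,    2,  INF,  INF],
              [  4,  INF,    5,   10,  INF],
              [  2,    5,  INF,    3,    8],
              [INF,   10,    3,  INF,    4],
              [INF,  INF,    8,    4,  INF]], float)   # symmetric travel times
n, src, dst = 5, 0, 4
tau = np.ones((n, n))                                  # pheromone levels
alpha, beta, rho, Q = 1.0, 2.0, 0.3, 10.0

def build_path():
    path, node, visited = [src], src, {src}
    while node != dst:
        nbrs = [j for j in range(n) if np.isfinite(W[node, j]) and j not in visited]
        if not nbrs:
            return None, INF                           # dead end
        p = np.array([tau[node, j] ** alpha * (1.0 / W[node, j]) ** beta for j in nbrs])
        node = rng.choice(nbrs, p=p / p.sum())
        visited.add(node)
        path.append(node)
    length = sum(W[a, b] for a, b in zip(path, path[1:]))
    return path, length

best_path, best_len = None, INF
for _ in range(50):                                    # colony iterations
    tau *= (1.0 - rho)                                 # evaporation
    for _ in range(10):                                # ants per iteration
        path, length = build_path()
        if path is None:
            continue
        if length < best_len:
            best_path, best_len = path, length
        for a, b in zip(path, path[1:]):               # pheromone deposit
            tau[a, b] += Q / length
            tau[b, a] += Q / length

print("best route:", best_path, "travel time:", best_len)
```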
Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.
Zaitsev, M; Steinhoff, S; Shah, N J
2003-06-01
A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
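A minimal sketch of the per-pixel fit underlying Look-Locker-type T1 mapping of this kind: fit S(t) = A - B*exp(-t/T1*) to the sampled recovery curve and apply the standard correction T1 = T1*(B/A - 1). The synthetic values assume a perfect inversion; with an imperfect inversion the same correction becomes biased, which is the systematic error the abstract attributes to inversion-pulse imperfections.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, B, T1_star):
    return A - B * np.exp(-t / T1_star)

rng = np.random.default_rng(0)
t = np.linspace(50.0, 4000.0, 32)                # sampling times after inversion (ms)
A_true, B_true, T1_star_true = 1.0, 2.0, 800.0   # perfect inversion implies B = 2A
signal = model(t, A_true, B_true, T1_star_true) + rng.normal(0.0, 0.01, t.size)

(A, B, T1_star), _ = curve_fit(model, t, signal, p0=(0.8, 1.5, 500.0))
T1 = T1_star * (B / A - 1.0)                     # Look-Locker correction
print(f"T1* = {T1_star:.0f} ms, corrected T1 = {T1:.0f} ms")

# With an imperfect inversion (B < 2A) this correction underestimates T1, which
# is why the protocol extension includes mapping the inversion efficiency itself.
```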
Foddai, A C G; Grant, I R
2017-05-01
To validate an optimized peptide-mediated magnetic separation (PMS)-phage assay for detection of viable Mycobacterium avium subsp. paratuberculosis (MAP) in milk. Inclusivity, specificity and limit of detection 50% (LOD50) of the optimized PMS-phage assay were assessed. Plaques were obtained for all 43 MAP strains tested. Of 12 other Mycobacterium sp. tested, only Mycobacterium bovis BCG produced small numbers of plaques. The LOD50 of the PMS-phage assay was 0.93 MAP cells per 50 ml milk, which was better than both PMS-qPCR and PMS-culture. When individual milks (n = 146) and bulk tank milk (BTM, n = 22) obtained from Johne's affected herds were tested by the PMS-phage assay, viable MAP were detected in 31 (21.2%) of 146 individual milks and 13 (59.1%) of 22 BTM, with MAP numbers detected ranging from 6 to 948 plaque-forming units per 50 ml milk. PMS-qPCR and PMS-MGIT culture proved to be less sensitive tests than the PMS-phage assay. The optimized PMS-phage assay is the most sensitive and specific method available for the detection of viable MAP in milk. Further work is needed to streamline the PMS-phage assay, because the assay's multistep format currently makes it unsuitable for adoption by the dairy industry as a screening test. The inclusivity (ability to detect all MAP strains), specificity (ability to detect only MAP) and detection sensitivity (ability to detect low numbers of MAP) of the optimized PMS-phage assay have been comprehensively demonstrated for the first time. © 2017 The Society for Applied Microbiology.
Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong
2016-02-11
We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Accuracy Precision (mAP) of 0.951, a significant improvement over previous methods.
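A minimal sketch of a histogram-based global-contrast saliency map on a synthetic grayscale scene, used here only to localize a dark blob and report its bounding box; the actual region-contrast formulation and the DCNN stage are not reproduced, and the threshold is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.full((120, 120), 110, dtype=np.uint8)
img += rng.integers(0, 20, img.shape, dtype=np.uint8)    # paddy-like background
img[50:70, 60:80] = 30                                   # dark insect-like blob

hist = np.bincount(img.ravel(), minlength=256).astype(float)
levels = np.arange(256, dtype=float)
# Saliency of intensity value v: sum over u of hist[u] * |v - u| (global contrast).
sal_of_level = np.abs(levels[:, None] - levels[None, :]) @ hist
saliency = sal_of_level[img]
saliency /= saliency.max()

mask = saliency > 0.6                                    # threshold (assumed)
ys, xs = np.nonzero(mask)
print("bounding box of salient region:",
      (ys.min(), xs.min()), "to", (ys.max(), xs.max()))
```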
Fuel-Optimal Trajectories in a Planet-Moon Environment Using Multiple Gravity Assists
NASA Technical Reports Server (NTRS)
Ross, Shane D.; Grover, Piyush
2007-01-01
For low energy spacecraft trajectories such as multi-moon orbiters for the Jupiter system, multiple gravity assists by moons could be used in conjunction with ballistic capture to drastically decrease fuel usage. In this paper, we outline a procedure to obtain a family of zero-fuel multi-moon orbiter trajectories, using a family of Keplerian maps derived by the first author previously. The maps capture well the dynamics of the full equations of motion; the phase space contains a connected chaotic zone where intersections between unstable resonant orbit manifolds provide the template for lanes of fast migration between orbits of different semimajor axes. A patched three-body approach is used: the four-body problem is broken down into two three-body problems, and the search space is considerably reduced by the use of properties of the Keplerian maps. We also introduce the notion of a Switching Region, where the perturbations due to the two perturbing moons are of comparable strength and which separates the domains of applicability of the corresponding two Keplerian maps.
Lawrence, J R; Swerhone, G D W; Leppard, G G; Araki, T; Zhang, X; West, M M; Hitchcock, A P
2003-09-01
Confocal laser scanning microscopy (CLSM), transmission electron microscopy (TEM), and soft X-ray scanning transmission X-ray microscopy (STXM) were used to map the distribution of macromolecular subcomponents (e.g., polysaccharides, proteins, lipids, and nucleic acids) of biofilm cells and matrix. The biofilms were developed from river water supplemented with methanol, and although they comprised a complex microbial community, the biofilms were dominated by heterotrophic bacteria. TEM provided the highest-resolution structural imaging, CLSM provided detailed compositional information when used in conjunction with molecular probes, and STXM provided compositional mapping of macromolecule distributions without the addition of probes. By examining exactly the same region of a sample with combinations of these techniques (STXM with CLSM and STXM with TEM), we demonstrate that this combination of multimicroscopy analysis can be used to create a detailed correlative map of biofilm structure and composition. We are using these correlative techniques to improve our understanding of the biochemical basis for biofilm organization and to assist studies intended to investigate and optimize biofilms for environmental remediation applications.
Accurate atom-mapping computation for biochemical reactions.
Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D
2012-11-26
The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to account for the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
A real time QRS detection using delay-coordinate mapping for the microcontroller implementation.
Lee, Jeong-Whan; Kim, Kyeong-Seop; Lee, Bongsoo; Lee, Byungchae; Lee, Myoung-Ho
2002-01-01
In this article, we propose a new algorithm for real-time detection of QRS complexes in ECG signals, using the characteristics of phase portraits reconstructed by delay-coordinate mapping with lag rotundity. In reconstructing the phase portrait, the mapping parameters (time delay and mapping dimension) play important roles in shaping the portraits drawn in the new space. Experimentally, the optimal mapping time delay for detection of QRS complexes turned out to be 20 ms. To explore the meaning of this time delay and the proper mapping dimension, we applied fill factor, mutual information and autocorrelation function algorithms that are generally used to analyze the chaotic characteristics of sampled signals. From these results, we found that the performance of the proposed algorithm relies mainly on geometrical properties, such as the area of the reconstructed phase portrait. As a real application, we used our algorithm in the design of a small cardiac event recorder. This system records the patient's ECG and R-R intervals for 1 h to investigate the HRV characteristics of patients with vasovagal syncope symptoms. For the evaluation, we implemented our algorithm in C and applied it to the MIT/BIH arrhythmia database of 48 subjects. Our proposed algorithm achieved a 99.58% detection rate of QRS complexes.
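A minimal sketch of the delay-coordinate idea on a synthetic ECG: embed the signal as (x(t), x(t - 20 ms)) and use a simple geometric feature of the windowed phase portrait (here a bounding-box area) as the QRS indicator. Thresholds and the signal model are illustrative, and the paper's exact detection logic is not reproduced.

```python
import numpy as np

fs = 250                                   # sampling rate (Hz)
tau = int(0.020 * fs)                      # 20 ms delay in samples
t = np.arange(0, 10, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t)    # baseline wander
for beat in np.arange(0.5, 10, 0.8):       # add narrow QRS-like spikes
    ecg += 1.0 * np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))

x, y = ecg[tau:], ecg[:-tau]               # reconstructed phase-portrait coordinates

win = int(0.15 * fs)                       # 150 ms sliding window
area = np.zeros(x.size)
for i in range(win, x.size):
    xs, ys = x[i - win:i], y[i - win:i]
    # Bounding-box area of the windowed portrait as a simple geometric feature.
    area[i] = (xs.max() - xs.min()) * (ys.max() - ys.min())

rising = np.r_[0, np.diff(area)] > 0
peaks = np.where((area > 0.3) & rising)[0]
# Collapse consecutive indices into individual beat detections (0.3 s refractory).
beats = [p for k, p in enumerate(peaks) if k == 0 or p - peaks[k - 1] > int(0.3 * fs)]
print("detected beats:", len(beats), "expected:", len(np.arange(0.5, 10, 0.8)))
```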
Samanta, Rahul; Kumar, Saurabh; Chik, William; Qian, Pierre; Barry, Michael A; Al Raisi, Sara; Bhaskaran, Abhishek; Farraha, Melad; Nadri, Fazlur; Kizana, Eddy; Thiagalingam, Aravinda; Kovoor, Pramesh; Pouliopoulos, Jim
2017-10-01
Recent studies have demonstrated that intramyocardial adipose tissue (IMAT) may contribute to ventricular electrophysiological remodeling in patients with chronic myocardial infarction. Using an ovine model of myocardial infarction, we aimed to determine the influence of IMAT on scar tissue identification during endocardial contact mapping and the optimal voltage-based mapping criteria for defining IMAT-dense regions. In 7 sheep, left ventricular endocardial and transmural mapping was performed 84 weeks (15-111 weeks) post-myocardial infarction. The Spearman rank correlation coefficient was used to assess the relationship between endocardial contact electrogram amplitude and histological composition of the myocardium. Receiver operating characteristic curves were used to derive optimal electrogram thresholds for IMAT delineation during endocardial mapping and to describe the use of endocardial mapping for delineation of IMAT-dense regions within scar. Endocardial electrogram amplitude correlated significantly with IMAT (unipolar r = -0.48 ± 0.12, P < 0.001; bipolar r = -0.45 ± 0.22, P = 0.04) but not collagen (unipolar r = -0.36 ± 0.24, P = 0.13; bipolar r = -0.43 ± 0.31, P = 0.16). IMAT-dense regions of myocardium were reliably identified using endocardial mapping with unipolar and bipolar thresholds of <3.7 and <0.6 mV, respectively (single modality area under the curve = 0.80, P < 0.001; combined modality area under the curve = 0.84, P < 0.001). Unipolar mapping using optimal thresholding remained significantly reliable (area under the curve = 0.76, P < 0.001) during mapping of IMAT confined to putative scar border zones (bipolar amplitude, 0.5-1.5 mV). These novel findings enhance our understanding of the confounding influence of IMAT on endocardial scar mapping. Combined bipolar and unipolar voltage mapping using optimal thresholds may be useful for delineating IMAT-dense regions of myocardium in postinfarct cardiomyopathy. © 2017 American Heart Association, Inc.
A new chaotic multi-verse optimization algorithm for solving engineering optimization problems
NASA Astrophysics Data System (ADS)
Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella
2018-03-01
Multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from the multi-verse theory in physics. However, MVO, like most optimization algorithms, suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied on 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field; namely, three-bar truss, speed reducer design, pressure vessel problem, spring design, welded beam, rolling element bearing and multiple disc clutch brake. In the current study, a modified feasible-based mechanism is employed to handle constraints. In this mechanism, four rules are used to handle the specific constraint problem through maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. Also, the results reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.
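A minimal sketch of the chaotic ingredient: a sine chaotic map replaces uniform random draws in an otherwise ordinary stochastic search on a toy sphere function. The full MVO mechanics (white/black-hole exchange, wormhole existence and travelling-distance rates) are not reproduced.

```python
import numpy as np

def sine_map(x):
    return np.abs(np.sin(np.pi * x))           # chaotic sequence in (0, 1)

def sphere(x):
    return np.sum(x ** 2, axis=-1)

dim, n_agents, iters = 5, 20, 200
rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, (n_agents, dim))
chaos = rng.uniform(0.01, 0.99, (n_agents, dim))   # per-dimension chaotic state

best = pos[np.argmin(sphere(pos))].copy()
for t in range(iters):
    chaos = sine_map(chaos)                    # chaotic values replace random draws
    step = 1.0 - t / iters                     # shrinking exploration radius
    pos = best + step * (2 * chaos - 1) * np.abs(pos - best)
    idx = np.argmin(sphere(pos))
    if sphere(pos[idx]) < sphere(best):
        best = pos[idx].copy()

print("best objective:", float(sphere(best)))
```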
Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although anatomic and positional accuracy of electroanatomic mapping (EAM) systems have been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4×4cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Increasing chamber geometry was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
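A small in-silico sketch of the sampling-density experiment: sample a known LAT field on a 4 x 4 cm patch at increasing densities, re-interpolate, and watch the reconstruction error plateau. The planar-wave field used here is far simpler than re-entrant clinical activation patterns.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
size_cm = 4.0
gx, gy = np.meshgrid(np.linspace(0, size_cm, 80), np.linspace(0, size_cm, 80))
lat_true = 25.0 * gx + 5.0 * np.sin(2 * np.pi * gy / size_cm)   # LAT in ms

for density in [0.25, 0.5, 1.0, 2.0, 4.0]:                      # points/cm^2
    n_pts = max(4, int(density * size_cm ** 2))
    px = rng.uniform(0, size_cm, n_pts)
    py = rng.uniform(0, size_cm, n_pts)
    pv = 25.0 * px + 5.0 * np.sin(2 * np.pi * py / size_cm)
    lat_interp = griddata((px, py), pv, (gx, gy), method="linear")
    ok = ~np.isnan(lat_interp)                                   # inside convex hull
    rmse = np.sqrt(np.mean((lat_interp[ok] - lat_true[ok]) ** 2))
    print(f"{density:4.2f} points/cm^2 -> RMSE = {rmse:5.2f} ms")
```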
Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.
NASA Astrophysics Data System (ADS)
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed -forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers make them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by -symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery.
Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan
2006-05-21
At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems for IMRT generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions, but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubisztal, J., E-mail: julian.kubisztal@us.edu.pl
A new approach to numerical analysis of maps of a material surface has been proposed and discussed in detail. It was concluded that the roughness factor RF and the root mean square roughness S_q show a saturation effect with increasing size of the analysed maps, which allows the optimal map dimension representative of the examined material to be determined. A quantitative method of determining the predominant direction of the surface texture, based on the power spectral density function, is also proposed and discussed. The elaborated method was applied in surface analysis of Ni + Mo composite coatings. It was shown that co-deposition of molybdenum particles in the nickel matrix leads to an increase in surface roughness. In addition, a decrease in size of the embedded Mo particles in the Ni matrix causes an increase of both the surface roughness and the surface texture. It was also stated that the relation between the roughness factor and the double layer capacitance C_dl of the studied coatings is linear and allows determination of the double layer capacitance of the smooth nickel electrode. Highlights: optimization of the procedure for scanning the material surface; quantitative determination of the surface roughness and texture intensity; proposition of a parameter describing the privileged direction of the surface texture; determination of the double layer capacitance of the smooth electrode.
Underwater Multi-Vehicle Trajectory Alignment and Mapping Using Acoustic and Optical Constraints
Campos, Ricard; Gracias, Nuno; Ridao, Pere
2016-01-01
Multi-robot formations are an important advance in recent robotic developments, as they allow a group of robots to merge their capacities and perform surveys in a more convenient way. With the aim of keeping the costs and acoustic communications to a minimum, cooperative navigation of multiple underwater vehicles is usually performed at the control level. In order to maintain the desired formation, individual robots just react to simple control directives extracted from range measurements or ultra-short baseline (USBL) systems. Thus, the robots are unaware of their global positioning, which presents a problem for the further processing of the collected data. The aim of this paper is two-fold. First, we present a global alignment method to correct the dead reckoning trajectories of multiple vehicles to resemble the paths followed during the mission using the acoustic messages passed between vehicles. Second, we focus on the optical mapping application of these types of formations and extend the optimization framework to allow for multi-vehicle geo-referenced optical 3D mapping using monocular cameras. The inclusion of optical constraints is not performed using the common bundle adjustment techniques, but in a form improving the computational efficiency of the resulting optimization problem and presenting a generic process to fuse optical reconstructions with navigation data. We show the performance of the proposed method on real datasets collected within the Morph EU-FP7 project. PMID:26999144
How trees allocate carbon for optimal growth: insight from a game-theoretic model.
Fu, Liyong; Sun, Lidan; Han, Hao; Jiang, Libo; Zhu, Sheng; Ye, Meixia; Tang, Shouzheng; Huang, Minren; Wu, Rongling
2017-02-01
How trees allocate photosynthetic products to primary height growth and secondary radial growth reflects their capacity to best use environmental resources. Despite substantial efforts to explore tree height-diameter relationship empirically and through theoretical modeling, our understanding of the biological mechanisms that govern this phenomenon is still limited. By thinking of stem woody biomass production as an ecological system of apical and lateral growth components, we implement game theory to model and discern how these two components cooperate symbiotically with each other or compete for resources to determine the size of a tree stem. This resulting allometry game theory is further embedded within a genetic mapping and association paradigm, allowing the genetic loci mediating the carbon allocation of stemwood growth to be characterized and mapped throughout the genome. Allometry game theory was validated by analyzing a mapping data of stem height and diameter growth over perennial seasons in a poplar tree. Several key quantitative trait loci were found to interpret the process and pattern of stemwood growth through regulating the ecological interactions of stem apical and lateral growth. The application of allometry game theory enables the prediction of the situations in which the cooperation, competition or altruism is an optimal decision of a tree to fully use the environmental resources it owns. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The Structure of Optimum Interpolation Functions.
1983-02-01
References cited include: Daniel F. Merriam, ed., Plenum Press, 1970; Hiroshi Akima, "Comments on 'Optimal Contour Mapping Using Universal Kriging' by Ricardo O. Olea"; Mathematical Geology 14 (1982), 249-257; and Ricardo O. Olea, "Optimal Contour Mapping Using Universal Kriging," J. of Geophysical Res. 79.
JIGSAW: Joint Inhomogeneity estimation via Global Segment Assembly for Water-fat separation.
Lu, Wenmiao; Lu, Yi
2011-07-01
Water-fat separation in magnetic resonance imaging (MRI) is of great clinical importance, and the key to uniform water-fat separation lies in field map estimation. This work deals with three-point field map estimation, in which water and fat are modelled as two single-peak spectral lines, and field inhomogeneities shift the spectrum by an unknown amount. Due to the simplified spectrum modelling, there exists inherent ambiguity in forming field maps from multiple locally feasible field map values at each pixel. To resolve such ambiguity, spatial smoothness of field maps has been incorporated as a constraint of an optimization problem. However, there are two issues: the optimization problem is computationally intractable and even when it is solved exactly, it does not always separate water and fat images. Hence, robust field map estimation remains challenging in many clinically important imaging scenarios. This paper proposes a novel field map estimation technique called JIGSAW. It extends a loopy belief propagation (BP) algorithm to obtain an approximate solution to the optimization problem. The solution produces locally smooth segments and avoids error propagation associated with greedy methods. The locally smooth segments are then assembled into a globally consistent field map by exploiting the periodicity of the feasible field map values. In vivo results demonstrate that JIGSAW outperforms existing techniques and produces correct water-fat separation in challenging imaging scenarios.
Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.
Dastmalchi, Pouya; Veronis, Georgios
2013-12-30
We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
Laba, M.; Tsai, F.; Ogurcak, D.; Smith, S.; Richmond, M.E.
2005-01-01
Mapping invasive plant species in aquatic and terrestrial ecosystems helps to understand the causes of their progression, manage some of their negative consequences, and control them. In recent years, a variety of new remote-sensing techniques, like Derivative Spectral Analysis (DSA) of hyperspectral data, have been developed to facilitate this mapping. A number of questions related to these techniques remain to be addressed. This article attempts to answer one of these questions: Is the application of DSA optimal at certain times of the year? Field radiometric data gathered weekly during the summer of 1999 at selected field sites in upstate New York, populated with purple loosestrife (Lythrum salicaria L.), common reed (Phragmites australis (Cav.)) and cattail (Typha L.) are analyzed using DSA to differentiate among plant community types. First, second and higher-order derivatives of the reflectance spectra of nine field plots, varying in plant composition, are calculated and analyzed in detail to identify spectral ranges in which one or more community types have distinguishing features. On the basis of the occurrence and extent of these spectral ranges, experimental observations suggest that a satisfactory differentiation among community types was feasible on 30 August, when plants experienced characteristic phenological changes (transition from flowers to seed heads). Generally, dates in August appear optimal from the point of view of species differentiability and could be selected for image acquisitions. This observation, as well as the methodology adopted in this article, should provide a firm basis for the acquisition of hyperspectral imagery and for mapping the targeted species over a broad range of spatial scales. © 2005 American Society for Photogrammetry and Remote Sensing.
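A minimal sketch of the derivative step of DSA, assuming synthetic spectra in place of the 1999 field radiometer data: reflectance curves are smoothed and differentiated with a Savitzky-Golay filter, and the wavelength ranges where two community spectra differ most in their first derivative are flagged as candidate discriminating features.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic reflectance spectra (placeholders for field radiometer measurements).
wavelength = np.arange(400.0, 900.0, 2.0)                     # nm
loosestrife = 0.3 / (1 + np.exp(-(wavelength - 710) / 12))    # sigmoid red edge near 710 nm
cattail     = 0.3 / (1 + np.exp(-(wavelength - 725) / 15))    # red edge shifted to 725 nm

def derivative_spectrum(reflectance, order, window=11, poly=3, step=2.0):
    """Savitzky-Golay smoothed derivative of a reflectance spectrum."""
    return savgol_filter(reflectance, window_length=window, polyorder=poly,
                         deriv=order, delta=step)

d1_loose, d1_cattail = (derivative_spectrum(s, 1) for s in (loosestrife, cattail))

# Spectral ranges where the first-derivative spectra differ most are candidate
# discriminating features between community types.
separation = np.abs(d1_loose - d1_cattail)
best = wavelength[np.argsort(separation)[-5:]]
print("most discriminating wavelengths (nm):", np.sort(best))
```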
Aggregation of LoD 1 building models as an optimization problem
NASA Astrophysics Data System (ADS)
Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.
3D city models offered by digital map providers typically consist of several thousands or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high resolution cadastral and remote sensing data and can be very detailed. However, such a high degree of detail is not desirable in every application. One way to remove complexity is to aggregate individual buildings, simplify the ground plan and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on the region growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
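To make the MIP framing concrete, the sketch below, assuming the PuLP package and a toy set of buildings with candidate aggregation groups, casts the selection step as a set-partitioning problem: each building must end up in exactly one aggregate, and the objective trades the height spread within each chosen aggregate against a bonus per merge. The heights, groups, and weights are illustrative assumptions; the ground-plan simplification of the paper is not modelled.

```python
import itertools
import pulp  # assumed available; any MIP modeller would serve the same purpose

# Toy LoD 1 buildings: id -> height (m). Candidate groups are all subsets of size 1 or 2.
heights = {"A": 10.0, "B": 11.0, "C": 18.0, "D": 19.5}
groups = [g for r in (1, 2) for g in itertools.combinations(heights, r)]

def spread(group):
    """Height spread inside an aggregate: 0 for singletons, max - min otherwise."""
    h = [heights[b] for b in group]
    return max(h) - min(h)

prob = pulp.LpProblem("lod1_aggregation", pulp.LpMinimize)
use = {g: pulp.LpVariable("use_" + "".join(g), cat="Binary") for g in groups}

# Objective: total within-aggregate height spread, minus a bonus per merge so that
# aggregation is preferred whenever the height penalty is small enough.
prob += pulp.lpSum(spread(g) * use[g] for g in groups) \
        - 2.0 * pulp.lpSum((len(g) - 1) * use[g] for g in groups)

# Each building belongs to exactly one chosen aggregate (set partitioning).
for b in heights:
    prob += pulp.lpSum(use[g] for g in groups if b in g) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [g for g in groups if use[g].value() > 0.5]
print("aggregates:", chosen)
```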
Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa
2015-11-15
Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I - high confidence with clear pattern of activation through to Grade IV - non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; window-of-interest<100% of cycle length (CL); <95% tachycardia CL mapped; variability of CL and/or unstable fiducial reference marker; and suboptimal bar height and scar settings. A data collection and map interpretation algorithm has been developed to optimize Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A case Study of Applying Object-Relational Persistence in Astronomy Data Archiving
NASA Astrophysics Data System (ADS)
Yao, S. S.; Hiriart, R.; Barg, I.; Warner, P.; Gasson, D.
2005-12-01
The NOAO Science Archive (NSA) team is developing a comprehensive domain model to capture the science data in the archive. Java and an object model derived from the domain model well address the application layer of the archive system. However, since the RDBMS is the best-proven technology for data management, the challenge is the paradigm mismatch between the object and the relational models. Transparent object-relational mapping (ORM) persistence is a successful solution to this challenge. In the data modeling and persistence implementation of NSA, we are using Hibernate, a well-accepted ORM tool, to bridge the object model in the business tier and the relational model in the database tier. Thus, the database is isolated from the Java application. The application queries directly on objects using a DBMS-independent object-oriented query API, which frees the application developers from low-level JDBC and SQL so that they can focus on the domain logic. We present the detailed design of the NSA R3 (Release 3) data model and object-relational persistence, including mapping, retrieving and caching. Persistence layer optimization and performance tuning will be analyzed. The system is being built on J2EE, so the integration of Hibernate into the EJB container and the transaction management are also explored.
Application of a Fully Numerical Guidance to Mars Aerocapture
NASA Technical Reports Server (NTRS)
Matz, Daniel A.; Lu, Ping; Mendeck, Gavin F.; Sostaric, Ronald R.
2017-01-01
An advanced guidance algorithm, Fully Numerical Predictor-corrector Aerocapture Guidance (FNPAG), has been developed to perform aerocapture maneuvers in an optimal manner. It is a model-based, numerical guidance that benefits from requiring few adjustments across a variety of different hypersonic vehicle lift-to-drag ratios, ballistic coefficients, and atmospheric entry conditions. In this paper, FNPAG is first applied to the Mars Rigid Vehicle (MRV) mid lift-to-drag ratio concept. Then the study is generalized to a design map of potential Mars aerocapture missions and vehicles, ranging from the scale and requirements of recent robotic missions to those of potential human and precursor missions. The design map results show the versatility of FNPAG and provide insight for the design of Mars aerocapture vehicles and atmospheric entry conditions to achieve desired performance.
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
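As a small concrete instance of the max-sum labeling problem, the sketch below computes the MAP labeling of a chain-structured model by dynamic programming (max-product in the log domain). On chains and trees this is exact, which is precisely the case in which the LP relaxation reviewed above is tight. The unary and pairwise tables are arbitrary illustrative numbers, not data from the paper.

```python
import numpy as np

# Chain of 4 variables with 3 labels each: maximize the sum of unary + pairwise terms.
rng = np.random.default_rng(0)
unary = rng.normal(size=(4, 3))          # q_i(label)
pairwise = rng.normal(size=(3, 3, 3))    # g_i(label_i, label_{i+1}) for the 3 edges

# Forward pass: m[i, l] = best score of the prefix ending with label l at node i.
m = np.empty_like(unary)
back = np.zeros((4, 3), dtype=int)
m[0] = unary[0]
for i in range(1, 4):
    scores = m[i - 1][:, None] + pairwise[i - 1]      # rows: previous label, cols: current label
    back[i] = scores.argmax(axis=0)
    m[i] = unary[i] + scores.max(axis=0)

# Backtrack the optimal labeling from the last node.
labels = [int(m[-1].argmax())]
for i in range(3, 0, -1):
    labels.append(int(back[i, labels[-1]]))
labels.reverse()
print("MAP labeling:", labels, "value:", m[-1].max())
```

For cyclic graphs the same objective becomes NP-hard, which is where the upper bound and equivalent transformations reviewed in the paper come into play.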
Recent developments of artificial intelligence in drying of fresh food: A review.
Sun, Qing; Zhang, Min; Mujumdar, Arun S
2018-03-01
Intelligent operation is an important direction in the development of drying, and artificial intelligence (AI) technologies have been widely used to solve problems of nonlinear function approximation, pattern detection, data interpretation, optimization, simulation, diagnosis, control, data sorting, clustering, and noise reduction in different food drying technologies, owing to their self-learning and adaptive abilities, strong fault tolerance, and high robustness in mapping the nonlinear structure of arbitrarily complex and dynamic phenomena. This article presents a comprehensive review of intelligent drying technologies and their applications. The paper starts with an introduction to the basic theory of artificial neural networks (ANN), fuzzy logic, and expert systems. We then summarize AI applications in the modeling, prediction, and optimization of heat and mass transfer, thermodynamic performance parameters, quality indicators, and physicochemical properties of dried products, both in artificial biomimetic technologies (electronic nose, computer vision) and in conventional drying technologies. Furthermore, opportunities and limitations of AI techniques in drying are outlined to provide ideas for researchers in this area.
Network Hardware Virtualization for Application Provisioning in Core Networks
Gumaste, Ashwin; Das, Tamal; Khandwala, Kandarp; ...
2017-02-03
Service providers and vendors are moving toward a network-virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and allow provisioning of next-generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider's business can grow with network virtualization. We outline a decision map that allows mapping of applications to the technology supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. Finally, a simulation study validates our NV-induced model.
Direito, Artur; Walsh, Deirdre; Hinbarji, Moohamad; Albatal, Rami; Tooley, Mark; Whittaker, Robyn; Maddison, Ralph
2018-06-01
Few interventions to promote physical activity (PA) adapt dynamically to changes in individuals' behavior. Interventions targeting determinants of behavior are linked with increased effectiveness and should reflect changes in behavior over time. This article describes the application of two frameworks to assist the development of an adaptive evidence-based smartphone-delivered intervention aimed at influencing PA and sedentary behaviors (SB). Intervention mapping was used to identify the determinants influencing uptake of PA and optimal behavior change techniques (BCTs). Behavioral intervention technology was used to translate and operationalize the BCTs and its modes of delivery. The intervention was based on the integrated behavior change model, focused on nine determinants, consisted of 33 BCTs, and included three main components: (1) automated capture of daily PA and SB via an existing smartphone application, (2) classification of the individual into an activity profile according to their PA and SB, and (3) behavior change content delivery in a dynamic fashion via a proof-of-concept application. This article illustrates how two complementary frameworks can be used to guide the development of a mobile health behavior change program. This approach can guide the development of future mHealth programs.
Design of refractive laser beam shapers to generate complex irradiance profiles
NASA Astrophysics Data System (ADS)
Li, Meijie; Meuret, Youri; Duerr, Fabian; Vervaeke, Michael; Thienpont, Hugo
2014-05-01
A Gaussian laser beam is reshaped to have specific irradiance distributions in many applications in order to ensure optimal system performance. Refractive optics are commonly used for laser beam shaping. A refractive laser beam shaper is typically formed by either two plano-aspheric lenses or by one thick lens with two aspherical surfaces. Ray mapping is a general optical design technique to design refractive beam shapers based on geometric optics. This design technique in principle allows one to generate any rotationally symmetric irradiance profile, yet in the literature ray mapping has mainly been developed to transform a Gaussian irradiance profile to a uniform profile. For more complex profiles, especially those with low intensity in the inner region, like a Dark Hollow Gaussian (DHG) irradiance profile, the ray mapping technique is not directly applicable in practice. In order to generate these complex profiles, the numerical effort of calculating the aspherical surface points and fitting a surface with sufficient accuracy increases considerably. In this work we evaluate different sampling approaches and surface fitting methods. This allows us to propose and demonstrate a comprehensive numerical approach to efficiently design refractive laser beam shapers to generate rotationally symmetric collimated beams with a complex irradiance profile. Ray tracing analysis for several complex irradiance profiles demonstrates excellent performance of the designed lenses and the versatility of our design procedure.
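A minimal numerical sketch of the ray-mapping step for rotationally symmetric beams: the mapping r → R is obtained by equating the encircled energy of the input Gaussian with that of the target irradiance, here a dark hollow Gaussian stand-in. This mapping is the relation the aspheric surfaces must subsequently realize; the beam widths and grids are illustrative assumptions, and the surface fitting discussed in the paper is not reproduced.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Radial grids and illustrative irradiance profiles (arbitrary units).
r_in = np.linspace(0.0, 5.0, 2000)                              # input aperture radius (mm)
r_out = np.linspace(0.0, 5.0, 2000)                             # output aperture radius (mm)
I_in = np.exp(-2 * (r_in / 2.0) ** 2)                           # collimated Gaussian beam
I_out = (r_out / 1.5) ** 2 * np.exp(-2 * (r_out / 1.5) ** 2)    # dark hollow Gaussian target

def encircled_energy(radius, irradiance):
    """Normalized cumulative power within each radius for a rotationally symmetric beam."""
    e = cumulative_trapezoid(irradiance * radius, radius, initial=0.0)
    return e / e[-1]

E_in = encircled_energy(r_in, I_in)
E_out = encircled_energy(r_out, I_out)

# Ray mapping: for each input radius, the output radius enclosing the same energy fraction.
R_of_r = np.interp(E_in, E_out, r_out)
print("sample mapping r -> R:", list(zip(r_in[::500].round(2), R_of_r[::500].round(3))))
```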
Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki
2012-01-01
Background: For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results: We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Conclusions: Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486
NASA Technical Reports Server (NTRS)
Troudet, Terry; Merrill, Walter C.
1990-01-01
The ability of feed-forward neural network architectures to learn continuous valued mappings in the presence of noise was demonstrated in relation to parameter identification and real-time adaptive control applications. An error function was introduced to help optimize parameter values such as number of training iterations, observation time, sampling rate, and scaling of the control signal. The learning performance depended essentially on the degree of embodiment of the control law in the training data set and on the degree of uniformity of the probability distribution function of the data that are presented to the net during the training sequence. When a control law was corrupted by noise, the fluctuations of the training data biased the probability distribution function of the training data sequence. Only if the noise contamination is minimized and the degree of embodiment of the control law is maximized can a neural net develop a good representation of the mapping and be used as a neurocontroller. A multilayer net was trained with back-error-propagation to control a cart-pole system for linear and nonlinear control laws in the presence of data processing noise and measurement noise. The neurocontroller exhibited noise-filtering properties and was found to operate more smoothly than the teacher in the presence of measurement noise.
Quality changes of cuttlefish stored under various atmosphere modifications and vacuum packaging.
Bouletis, Achilleas D; Arvanitoyannis, Ioannis S; Hadjichristodoulou, Christos; Neofitou, Christos; Parlapani, Foteini F; Gkagtzis, Dimitrios C
2016-06-01
Seafood preservation and its shelf life prolongation are two of the main issues in the seafood industry. As a result, and in view of market globalization, research has been triggered in this direction by applying several techniques such as modified atmosphere packaging (MAP), vacuum packaging (VP) and active packaging (AP). However, seafood such as octopus, cuttlefish and others have not been thoroughly investigated up to now. The aim of this research was to determine the optimal conditions of modified atmosphere under which cuttlefish storage time and consequently shelf life time could be prolonged without endangering consumer safety. It was found that cuttlefish shelf life reached 2, 2, 4, 8 and 8 days for control, VP, MAP 1, MAP 2 and MAP 3 (20% CO2 -80% N2 , 50% CO2 -50% N2 and 70% CO2 -30% N2 for MAP 1, 2 and 3, respectively) samples, respectively, judging by their sensorial attributes. Elevated CO2 levels had a strong microbiostatic effect, whereas storage under vacuum did not offer significant advantages. All physicochemical attributes of MAP-treated samples were better preserved compared to control. Application of high CO2 atmospheres such as MAP 2 and MAP 3 proved to be an effective strategy toward preserving the characteristics and prolonging the shelf life of fresh cuttlefish and thereby improving its potential in the market. © 2015 Society of Chemical Industry.
Conrad, Erin C; Mossner, James M; Chou, Kelvin L; Patil, Parag G
2018-05-23
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) improves motor symptoms of Parkinson disease (PD). However, motor outcomes can be variable, perhaps due to inconsistent positioning of the active contact relative to an unknown optimal locus of stimulation. Here, we determine the optimal locus of STN stimulation in a geometrically unconstrained, mathematically precise, and atlas-independent manner, using Unified Parkinson Disease Rating Scale (UPDRS) motor outcomes and an electrophysiological neuronal stimulation model. In 20 patients with PD, we mapped motor improvement to active electrode location, relative to the individual's directly MRI-visualized STN. Our analysis included a novel, unconstrained and computational electrical-field model of neuronal activation to estimate the optimal locus of DBS. We mapped the optimal locus to a tightly defined ovoid region 0.49 mm lateral, 0.88 mm posterior, and 2.63 mm dorsal to the anatomical midpoint of the STN. On average, this locus is 11.75 mm lateral, 1.84 mm posterior, and 1.08 mm ventral to the mid-commissural point. Our novel, atlas-independent method reveals a single, ovoid optimal locus of stimulation in STN DBS for PD. The methodology, here applied to UPDRS and PD, is generalizable to atlas-independent mapping of other motor and non-motor effects of DBS. © 2018 S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts. 3. Development of a code generator for performance prediction. 4. Automated partitioning. 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
NASA Technical Reports Server (NTRS)
Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.
1992-01-01
A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
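A schematic of the solution-mapping idea under simplified assumptions: a cheap second-order polynomial response surface is fitted to "computer experiments" of a single model response over a factorial design in two coded parameters, and the surrogate is then used in an optimization against a measured target. The response function, design, and target value are synthetic placeholders, not the methane-combustion responses of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_response(theta):
    """Stand-in for a full kinetic simulation of one response (e.g. an ignition delay)."""
    a, b = theta
    return np.exp(0.8 * a - 0.5 * b + 0.2 * a * b)

# Factorial-style design over the two active parameters (coded units in [-1, 1]).
levels = np.array([-1.0, 0.0, 1.0])
design = np.array([(a, b) for a in levels for b in levels])
y = np.array([expensive_response(t) for t in design])

# Fit a quadratic response surface to ln(response): ln y ≈ c0 + c1·a + c2·b + c3·ab + c4·a² + c5·b².
def basis(theta):
    a, b = theta
    return np.array([1.0, a, b, a * b, a * a, b * b])

X = np.array([basis(t) for t in design])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
surrogate = lambda theta: np.exp(basis(theta) @ coef)

# Optimization against a (synthetic) measured target using only the cheap surrogate.
target = 1.7
fit = minimize(lambda t: (surrogate(t) - target) ** 2, x0=[0.0, 0.0],
               bounds=[(-1, 1), (-1, 1)])
print("optimized coded parameters:", fit.x.round(3),
      "surrogate response:", float(surrogate(fit.x)).__round__(3))
```

In the joint multiparameter, multi-data-set setting described above, one such surrogate is built per response and the residuals to all targets are minimized simultaneously.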
N, Sadhasivam; R, Balamurugan; M, Pandi
2018-01-27
Objective: Epigenetic modifications involving DNA methylation and histone status are responsible for the stable maintenance of cellular phenotypes. Abnormalities may be causally involved in cancer development and therefore could have diagnostic potential. The field of epigenomics refers to all epigenetic modifications implicated in control of gene expression, with a focus on better understanding of human biology in both normal and pathological states. An epigenomics scientific workflow is essentially a data processing pipeline to automate the execution of various genome sequencing operations or tasks. The cloud is a popular computing platform for deploying large-scale epigenomics scientific workflows. Its dynamic environment provides various resources to scientific users on a pay-per-use billing model. Scheduling epigenomics scientific workflow tasks is a complicated problem on a cloud platform. We here focused on application of an improved particle swarm optimization (IPSO) algorithm for this purpose. Methods: The IPSO algorithm was applied to find suitable resources and allocate epigenomics tasks so that the total cost was minimized for detection of epigenetic abnormalities of potential application for cancer diagnosis. Results: The results showed that IPSO-based task-to-resource mapping reduced total cost by 6.83 percent as compared to the traditional PSO algorithm. Conclusion: The results for various cancer diagnosis tasks showed that IPSO-based task-to-resource mapping can achieve better costs when compared to PSO-based mapping for epigenomics scientific application workflows. Creative Commons Attribution License
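The sketch below illustrates the general idea with a plain particle swarm (not the authors' improved variant): particle positions are continuous, decoded to a discrete task-to-resource assignment, and scored by a simple cost model of per-hour price times task runtime on each resource. The task sizes, prices, and PSO constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_res = 8, 3
runtime = rng.uniform(1.0, 5.0, size=(n_tasks, n_res))   # hours of each task on each resource
price = np.array([0.10, 0.25, 0.60])                      # $/hour of each resource (assumed)

def decode(position):
    """Map continuous particle positions to a discrete resource index per task."""
    return np.clip(position, 0, n_res - 1e-9).astype(int)

def cost(position):
    assign = decode(position)
    return float(np.sum(runtime[np.arange(n_tasks), assign] * price[assign]))

# Plain global-best PSO over the continuous encoding.
n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform(0, n_res, size=(n_particles, n_tasks))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, n_res)
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best mapping:", decode(gbest), "total cost: $%.2f" % pbest_cost.min())
```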
Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald
2018-06-04
A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most of them are limited to a global analysis due to their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at a low accelerating voltage identify phases at the micron scale. In comparisons between the annular and a conventional SDD, the aSDD operated at optimized conditions made phase mapping a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.
Radiometric calibration of wide-field camera system with an application in astronomy
NASA Astrophysics Data System (ADS)
Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika
2017-09-01
The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to the nature of such data (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
Nonlinear mapping of the luminance in dual-layer high dynamic range displays
NASA Astrophysics Data System (ADS)
Guarnieri, Gabriele; Ramponi, Giovanni; Bonfiglio, Silvio; Albani, Luigi
2009-02-01
It has long been known that the human visual system (HVS) has a nonlinear response to luminance. This nonlinearity can be quantified using the concept of just noticeable difference (JND), which represents the minimum amplitude of a specified test pattern an average observer can discern from a uniform background. The JND depends on the background luminance following a threshold versus intensity (TVI) function. It is possible to define a curve which maps physical luminances into a perceptually linearized domain. This mapping can be used to optimize a digital encoding, by minimizing the visibility of quantization noise. It is also commonly used in medical applications to display images adapting to the characteristics of the display device. High dynamic range (HDR) displays, which are beginning to appear on the market, can display luminance levels outside the range in which most standard mapping curves are defined. In particular, dual-layer LCD displays are able to extend the gamut of luminance offered by conventional liquid crystals towards the black region; in such areas suitable and HVS-compliant luminance transformations need to be determined. In this paper we propose a method, which is primarily targeted to the extension of the DICOM curve used in medical imaging, but also has a more general application. The method can be modified in order to compensate for the ambient light, which can be significantly greater than the black level of an HDR display and consequently reduce the visibility of the details in dark areas.
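To make the idea of a perceptually linearized mapping concrete, the sketch below accumulates just-noticeable luminance steps from the display's black level to its peak using a simple Weber-like TVI model, and then distributes the available digital codes uniformly in JND index. The threshold model, luminance range, and code count are illustrative assumptions standing in for the DICOM grayscale standard display function, which is not reproduced here.

```python
import numpy as np

def jnd_threshold(L):
    """Illustrative TVI model: near-constant threshold in the dark, Weber-like when bright."""
    return np.maximum(0.0005, 0.01 * L)   # cd/m^2; NOT the DICOM GSDF

L_min, L_max = 0.005, 4000.0              # black level and peak of a dual-layer HDR display (assumed)

# Walk up the luminance axis one JND at a time -> table of (JND index, luminance).
levels = [L_min]
while levels[-1] < L_max:
    levels.append(levels[-1] + jnd_threshold(levels[-1]))
levels = np.array(levels)
print(f"{levels.size} JNDs between {L_min} and {L_max} cd/m^2")

# Perceptually linear encoding: spread the digital codes uniformly in JND index,
# then map each code to the corresponding luminance.
n_codes = 1024
jnd_index = np.linspace(0, levels.size - 1, n_codes)
code_to_luminance = np.interp(jnd_index, np.arange(levels.size), levels)
```

Compensation for ambient light, as discussed above, would amount to adding the ambient luminance before evaluating the threshold model.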
Ou-Yang, Si-sheng; Lu, Jun-yan; Kong, Xiang-qian; Liang, Zhong-jie; Luo, Cheng; Jiang, Hualiang
2012-01-01
Computational drug discovery is an effective strategy for accelerating and economizing drug discovery and development process. Because of the dramatic increase in the availability of biological macromolecule and small molecule information, the applicability of computational drug discovery has been extended and broadly applied to nearly every stage in the drug discovery and development workflow, including target identification and validation, lead discovery and optimization and preclinical tests. Over the past decades, computational drug discovery methods such as molecular docking, pharmacophore modeling and mapping, de novo design, molecular similarity calculation and sequence-based virtual screening have been greatly improved. In this review, we present an overview of these important computational methods, platforms and successful applications in this field. PMID:22922346
The Earth Science Research Network as Seen Through Network Analysis of the AGU
NASA Astrophysics Data System (ADS)
Narock, T.; Hasnain, S.; Stephan, R.
2017-12-01
Scientometrics is the science of science. Scientometric research includes measurements of impact, mapping of scientific fields, and the production of indicators for use in policy and management. We have leveraged network analysis in a scientometric study of the American Geophysical Union (AGU). Data from the AGU's Linked Data Abstract Browser were used to create visualization and analytics tools to explore the Earth science research network. Our application applies network theory to look at network structure within the various AGU sections, identify key individuals and communities related to Earth science topics, and examine multi-disciplinary collaboration across sections. Opportunities to optimize Earth science output, as well as policy and outreach applications, are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franklin, M.L.; Kittelson, D.B.; Leuer, R.H.
1996-10-01
A two-dimensional optimization process, which simultaneously adjusts the spark timing and equivalence ratio of a lean-burn, natural gas, Hercules G1600 engine, has been demonstrated. First, the three-dimensional surface of thermal efficiency was mapped versus spark timing and equivalence ratio at a single speed and load combination. Then the ability of the control system to find and hold the combination of timing and equivalence ratio that gives the highest thermal efficiency was explored. NOx, CO, and HC maps were also constructed from the experimental data to determine the tradeoffs between efficiency and emissions. The optimization process adds small synchronous disturbances to the spark timing and air flow while the fuel injected per cycle is held constant for four cycles. The engine speed response to these disturbances is used to determine the corrections for spark timing and equivalence ratio. The control process, in effect, uses the engine itself as the primary sensor. The control system can adapt to changes in fuel composition, operating conditions, engine wear, or other factors that may not be easily measured. Although this strategy was previously demonstrated in a Volkswagen 1.7 liter light duty engine (Franklin et al., 1994b), until now it has not been demonstrated in a heavy-duty engine. This paper covers the application of the approach to a Hercules G1600 engine.
Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure
NASA Astrophysics Data System (ADS)
Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.
2014-12-01
Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and releasing resources when they are not in use; it ensures that compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require explicit policy definitions, such as server load thresholds, and therefore lack any predictive intelligence to make optimal decisions; B) they do not decide on the right size of resource and thereby do not result in a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch processing jobs → the Hadoop/Big Data case; B. transactional applications → any application that processes continuous transactions (requests/responses). With reference to the classical queueing model, we are trying to model a scenario where servers have a price and a capacity (size) and the system can add and delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job queue and use its metrics to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need to consider the load (CPU/data) at the server level to characterize the size requirement? How do we learn that based on job type?
What is missing? An operational inundation mapping framework by SAR data
NASA Astrophysics Data System (ADS)
Shen, X.; Anagnostou, E. N.; Zeng, Z.; Kettner, A.; Hong, Y.
2017-12-01
Compared to optical sensors, synthetic aperture radar (SAR) works day and night in all weather conditions. In addition, its spatial resolution does not decrease with the height of the platform, making it applicable to a range of important studies. However, existing studies have not addressed the operational demands of real-time inundation mapping; the direct evidence is that no water-body product exists for any SAR-based satellite. What is missing between science and products? Automation and quality. What makes it so difficult to develop an operational inundation mapping technique based on SAR data? Spectrally, unlike optical water indices such as MNDWI, AWEI, etc., where a relatively constant threshold may apply across image acquisitions, regions, and sensors, the threshold to separate water from non-water pixels has to be chosen individually for each SAR image. The optimization of this threshold is the first obstacle to the automation of SAR algorithms. Morphologically, the quality and reliability of the results are compromised by over-detection caused by smooth surfaces and shadowed areas, by noise-like speckle, and by under-detection caused by strong scatterers. In this study, we propose a three-step framework that addresses all of the aforementioned issues of operational inundation mapping with SAR data. The framework consists of 1) optimization of Wishart distribution parameters of single/dual/fully-polarized SAR data, 2) morphological removal of over-detection, and 3) machine-learning-based removal of under-detection. The framework utilizes not only the SAR data but also the synergy of a digital elevation model (DEM) and fine-resolution optical-sensor-based products, including a water probability map, a land cover classification map (optional), and river width. The framework has been validated over multiple areas in different parts of the world using different satellite SAR data and globally available ancillary data products. Therefore, it has the potential to serve as an operational inundation mapping algorithm for any SAR mission, such as SWOT, ALOS, or Sentinel. Selected results using ALOS/PALSAR-1 L-band dual-polarized data around the Connecticut River are provided in the attached Figure.
Fast approximate delivery of fluence maps for IMRT and VMAT
NASA Astrophysics Data System (ADS)
Balvert, Marleen; Craft, David
2017-02-01
In this article we provide a method to generate the trade-off between delivery time and fluence map matching quality for dynamically delivered fluence maps. At the heart of our method lies a mathematical programming model that, for a given duration of delivery, optimizes leaf trajectories and dose rates such that the desired fluence map is reproduced as well as possible. We begin with the single fluence map case and then generalize the model and the solution technique to the delivery of sequential fluence maps. The resulting large-scale, non-convex optimization problem was solved using a heuristic approach. We test our method using a prostate case and a head and neck case, and present the resulting trade-off curves. Analysis of the leaf trajectories reveals that short time plans have larger leaf openings in general than longer delivery time plans. Our method allows one to explore the continuum of possibilities between coarse, large segment plans characteristic of direct aperture approaches and narrow field plans produced by sliding window approaches. Exposing this trade-off will allow for an informed choice between plan quality and solution time. Further research is required to speed up the optimization process to make this method clinically implementable.
Influence maximization in complex networks through optimal percolation
NASA Astrophysics Data System (ADS)
Morone, Flaviano; Makse, Hernán A.
2015-08-01
The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deffet, S; Macq, B; Farace, P
2016-06-15
Purpose: The conversion from Hounsfield units (HU) to stopping powers is a major source of range uncertainty in proton therapy (PT). Our contribution shows how proton radiographs (PR) acquired with a multi-layer ionization chamber in a PT center can be used for accurate patient positioning and subsequently for patient-specific optimization of the conversion from HU to stopping powers. Methods: A multi-layer ionization chamber was used to measure the integral depth-dose (IDD) of 220 MeV pencil beam spots passing through several anthropomorphic phantoms. The whole area of interest was imaged by repositioning the couch and by acquiring a 45×45 mm² frame for each position. A rigid registration algorithm was implemented to correct the positioning error between the proton radiographs and the planning CT. After registration, the stopping power map obtained from the planning CT with the calibration curve of the treatment planning system was used together with the water equivalent thickness gained from two proton radiographs to generate a phantom-specific stopping power map. Results: Our results show that it is possible to make a registration with submillimeter accuracy from proton radiography obtained by sending beamlets separated by more than 1 mm. This was made possible by the complex shape of the IDD due to the presence of lateral heterogeneities along the path of the beam. Submillimeter positioning was still possible with a 5 mm spot spacing. Phantom-specific stopping power maps obtained by minimizing the range error were cross-verified by the acquisition of an additional proton radiograph in which the phantom was positioned in a random but known manner. Conclusion: Our results indicate that a CT-PR registration algorithm together with range-error-based optimization can be used to produce a patient-specific stopping power map. Sylvain Deffet reports financial funding of his PhD thesis by Ion Beam Applications (IBA) during the confines of the study and outside the submitted work. Francois Vander Stappen reports being employed by Ion Beam Applications (IBA) during the confines of the study and outside the submitted work.
Wang, Jie-sheng; Li, Shu-xia; Gao, Jie
2014-01-01
To meet the real-time fault diagnosis and optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamic adjustment method for the inertia weight is adopted to optimize the structural parameters of the SOM neural network. Fault pattern classification of the polymerization kettle equipment then realizes the nonlinear mapping from the symptom set to the fault set. Finally, fault diagnosis simulation experiments are conducted using industrial on-site historical data of the polymerization kettle, and the results show that the proposed PSO-SOM fault diagnosis strategy is effective.
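A compact sketch of the SOM part of the approach: an online self-organizing map is trained on process feature vectors, and each trained node is then labelled by the fault class of the training samples it wins, so that new samples are diagnosed by their best-matching unit. The synthetic two-class data stand in for the kettle's symptom vectors, and the PSO tuning of the map's structural parameters is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "symptom" vectors for two process conditions (placeholders for kettle data).
normal = rng.normal([0.2, 0.8, 0.1], 0.05, size=(100, 3))
fault = rng.normal([0.7, 0.3, 0.6], 0.05, size=(100, 3))
X = np.vstack([normal, fault])
labels = np.array([0] * 100 + [1] * 100)

# 6x6 SOM with online updates and a shrinking neighbourhood/learning rate.
grid = np.array([(i, j) for i in range(6) for j in range(6)], dtype=float)
W = rng.random((36, 3))
n_epochs = 20
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs) + 0.01
    sigma = 3.0 * (1 - epoch / n_epochs) + 0.5
    for x in rng.permutation(X):
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))             # best-matching unit
        dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))                    # neighbourhood function
        W += lr * h[:, None] * (x - W)

# Label each node by majority vote of the samples it wins, then classify a new sample.
wins = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
node_label = np.array([labels[wins == n].mean() > 0.5 if (wins == n).any() else 0
                       for n in range(36)], dtype=int)
new_sample = np.array([0.65, 0.35, 0.55])
print("diagnosed class:", node_label[np.argmin(((W - new_sample) ** 2).sum(axis=1))])
```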
Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail; Volpi, Michele; Copa, Loris
2010-05-01
The problem of environmental monitoring network optimization (MNO) is one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as the design or redesign of a monitoring network by applying some optimization criteria. The most developed and widespread methods are based on geostatistics (the family of kriging models and conditional stochastic simulations). In geostatistics the variance is mainly used as the optimization criterion, which has both advantages and drawbacks. In the present research we study the application of advanced techniques following from statistical learning theory (SLT) - support vector machines (SVM) - and consider the optimization of monitoring networks for a classification problem (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.). SVM is a universal nonlinear modeling tool for classification problems in high-dimensional spaces. The SVM solution maximizes the decision boundary between classes and has good generalization properties for noisy data. The sparse solution of SVM is based on support vectors - data which contribute to the solution with nonzero weights. Fundamentally, MNO for classification problems can be considered as the task of selecting new measurement points which increase the quality of spatial classification and reduce the testing error (the error on new independent measurements). In SLT this is a typical problem of active learning - the selection of new unlabelled points which efficiently reduce the testing error. A classical approach to active learning (margin sampling) is to sample the points closest to the classification boundary. This solution is suboptimal when points (or, more generally, the dataset) are redundant for the same class. In the present research we propose and study two new advanced methods of active learning adapted to the solution of the MNO problem: 1) hierarchical top-down clustering in the input space in order to remove redundancy when data are clustered, and 2) a general, classifier-independent method which gives posterior probabilities that can be used to define the classifier confidence and corresponding proposals for new measurement points. The basic ideas and procedures are explained using simulated data sets. The real case study deals with the analysis and mapping of soil types, which is a multi-class classification problem. Maps of soil types are important for the analysis and 3D modeling of heavy metal migration in soil and for risk prediction mapping. The results obtained demonstrate the high quality of SVM mapping and the efficiency of monitoring network optimization using active learning approaches. The research was partly supported by SNSF projects No. 200021-126505 and 200020-121835.
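A small sketch of the margin-sampling baseline discussed above, assuming scikit-learn: an SVM is fitted to the currently measured points, and the next monitoring location is the unlabelled candidate closest to the decision boundary. The two-class spatial data are synthetic; the clustering-based and posterior-probability variants proposed in the abstract are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Synthetic candidate locations with a hidden two-class structure (e.g. two soil types).
candidates = rng.uniform(0, 10, size=(400, 2))
score = candidates[:, 0] + 0.5 * candidates[:, 1]
true_class = (score > 7.5).astype(int)

# Initial monitoring network: a few points at both ends of the gradient (both classes present).
measured = [int(i) for i in np.argsort(score)[:5]] + [int(i) for i in np.argsort(score)[-5:]]

for step in range(20):
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(candidates[measured], true_class[measured])

    unmeasured = np.setdiff1d(np.arange(len(candidates)), measured)
    # Margin sampling: pick the candidate whose decision value is closest to zero.
    margins = np.abs(clf.decision_function(candidates[unmeasured]))
    measured.append(int(unmeasured[np.argmin(margins)]))

accuracy = (clf.predict(candidates) == true_class).mean()
print(f"{len(measured)} measurements, classification accuracy on all candidates: {accuracy:.3f}")
```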
2014-06-01
…discretized map, and use the map to optimally solve the navigation task. The optimal navigation solution utilizes the well-known travelling salesman problem (TSP)…
NASA Astrophysics Data System (ADS)
He, Yaoyao; Yang, Shanlin; Xu, Qifa
2013-07-01
In order to solve the model of short-term cascaded hydroelectric system scheduling, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which uses the water discharge as the decision variable, combined with a death penalty function. Following the principle of maximum power generation, the proposed approach makes use of the ergodicity, symmetry, and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results show the effectiveness and robustness of the proposed CPSO algorithm in comparison with other traditional algorithms.
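The fragment below illustrates only the chaotic ingredient: a classic logistic map generates a sequence that is stretched over the feasible discharge range to seed (or perturb) PSO particles, exploiting its ergodicity instead of uniform random draws. The map constant, discharge bounds, and swarm dimensions are generic assumptions; the authors' specific improved map and the hydroelectric scheduling model are not reproduced.

```python
import numpy as np

def logistic_sequence(n, x0=0.4321, mu=4.0):
    """Classic logistic map x_{k+1} = mu * x_k * (1 - x_k); fully chaotic for mu = 4."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * x * (1 - x)
        seq[k] = x
    return seq

# Seed a swarm of 20 particles, each a 24-period water-discharge schedule (m^3/s).
n_particles, horizon = 20, 24
q_min, q_max = 50.0, 300.0                       # feasible discharge bounds (illustrative)
chaos = logistic_sequence(n_particles * horizon).reshape(n_particles, horizon)
initial_swarm = q_min + chaos * (q_max - q_min)  # chaotic values mapped onto [q_min, q_max]
print(initial_swarm[0].round(1))
```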
Interactions between Flight Dynamics and Propulsion Systems of Air-Breathing Hypersonic Vehicles
NASA Astrophysics Data System (ADS)
Dalle, Derek J.
The development and application of a first-principles-derived reduced-order model called MASIV (Michigan/AFRL Scramjet In Vehicle) for an air-breathing hypersonic vehicle is discussed. Several significant and previously unreported aspects of hypersonic flight are investigated. A fortunate coupling between increasing Mach number and decreasing angle of attack is shown to extend the range of operating conditions for a class of supersonic inlets. Detailed maps of isolator unstart and ram-to-scram transition are shown on the flight corridor map for the first time. In scram mode the airflow remains supersonic throughout the engine, while in ram mode there is a region of subsonic flow. Accurately predicting the transition between these two modes requires models for complex shock interactions, finite-rate chemistry, fuel-air mixing, pre-combustion shock trains, and thermal choking, which are incorporated into a unified framework here. Isolator unstart occurs when the pre-combustion shock train is longer than the isolator, which blocks airflow from entering the engine. Finally, cooptimization of the vehicle design and trajectory is discussed. An optimal control technique is introduced that greatly reduces the number of computations required to optimize the simulated trajectory.
Minati, Ludovico; Cercignani, Mara; Chan, Dennis
2013-10-01
Graph theory-based analyses of brain network topology can be used to model the spatiotemporal correlations in neural activity detected through fMRI, and such approaches have wide-ranging potential, from detection of alterations in preclinical Alzheimer's disease through to command identification in brain-machine interfaces. However, due to prohibitive computational costs, graph-based analyses to date have principally focused on measuring connection density rather than mapping the topological architecture in full by exhaustive shortest-path determination. This paper outlines a solution to this problem through parallel implementation of Dijkstra's algorithm in programmable logic. The processor design is optimized for large, sparse graphs and provided in full as synthesizable VHDL code. An acceleration factor between 15 and 18 is obtained on a representative resting-state fMRI dataset, and maps of Euclidean path length reveal the anticipated heterogeneous cortical involvement in long-range integrative processing. These results enable high-resolution geodesic connectivity mapping for resting-state fMRI in patient populations and real-time geodesic mapping to support identification of imagined actions for fMRI-based brain-machine interfaces. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
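For reference, the computation being accelerated is exhaustive shortest-path determination over a sparse connectivity graph; a plain software version with a binary heap, the serial counterpart of the parallel hardware design, looks like the sketch below, run once per node to obtain the full geodesic path-length map. The tiny weighted graph is purely illustrative.

```python
import heapq

def dijkstra(adjacency, source):
    """Shortest path lengths from `source` in a sparse weighted graph {node: [(nbr, w), ...]}."""
    dist = {node: float("inf") for node in adjacency}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in adjacency[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Toy sparse "connectivity" graph; edge weights play the role of connection distances.
graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0), ("D", 5.0)],
    "C": [("A", 4.0), ("B", 2.0), ("D", 1.0)],
    "D": [("B", 5.0), ("C", 1.0)],
}
# Exhaustive shortest-path (geodesic) map: run Dijkstra from every node.
geodesic = {node: dijkstra(graph, node) for node in graph}
print(geodesic["A"])
```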
NASA Technical Reports Server (NTRS)
Young, Kelsey E.; Evans, C. A.; Hodges, K. V.
2012-01-01
While traditional geologic mapping includes the examination of structural relationships between rock units in the field, more advanced technology now enables us to simultaneously collect and combine analytical datasets with field observations. Information about tectonomagmatic processes can be gleaned from these combined data products. Historically, construction of multi-layered field maps that include sample data has been accomplished serially (first map and collect samples, analyze samples, combine data, and finally, readjust maps and conclusions about geologic history based on combined data sets). New instruments that can be used in the field, such as a handheld X-ray fluorescence (XRF) unit, are now available. Targeted use of such instruments enables geologists to collect preliminary geochemical data while in the field so that they can optimize scientific data return from each field traverse. Our study tests the application of this technology and projects the benefits gained by real-time geochemical data in the field. The integrated data set produces a richer geologic map and facilitates a stronger contextual picture for field geologists when collecting field observations and samples for future laboratory work. Real-time geochemical data on samples also provide valuable insight regarding sampling decisions by the field geologist.
Coolen, Bram F; Poot, Dirk H J; Liem, Madieke I; Smits, Loek P; Gao, Shan; Kotek, Gyula; Klein, Stefan; Nederveen, Aart J
2016-03-01
A novel three-dimensional (3D) T1 and T2 mapping protocol for the carotid artery is presented. A 3D black-blood imaging sequence was adapted allowing carotid T1 and T2 mapping using multiple flip angles and echo time (TE) preparation times. B1 mapping was performed to correct for spatially varying deviations from the nominal flip angle. The protocol was optimized using simulations and phantom experiments. In vivo scans were performed on six healthy volunteers in two sessions, and in a patient with advanced atherosclerosis. Compensation for patient motion was achieved by 3D registration of the inter/intrasession scans. Subsequently, T1 and T2 maps were obtained by maximum likelihood estimation. Simulations and phantom experiments showed that the bias in T1 and T2 estimation was < 10% within the range of physiological values. In vivo T1 and T2 values for carotid vessel wall were 844 ± 96 and 39 ± 5 ms, with good repeatability across scans. Patient data revealed altered T1 and T2 values in regions of atherosclerotic plaque. The 3D T1 and T2 mapping of the carotid artery is feasible using variable flip angle and variable TE preparation acquisitions. We foresee application of this technique for plaque characterization and monitoring plaque progression in atherosclerotic patients. © 2015 Wiley Periodicals, Inc.
Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.
2012-01-01
We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Sensmeier, Mark D.; Stewart, Bret A.
2006-01-01
Algorithms for rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process have been developed. Application of these algorithms should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. Recent enhancements to this approach include the porting of the algorithms to a platform-independent software language Python, and modifications to specifically consider morphing aircraft-type configurations. Two sample cases which illustrate these recent developments are presented.
The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit
NASA Technical Reports Server (NTRS)
Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete;
1998-01-01
Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.
Applying graph partitioning methods in measurement-based dynamic load balancing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha
Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable-objects-based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance compared to the METIS partitioners for several cases, both in terms of application execution time and the number of objects migrated.
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processor Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing where code changes are very frequent, making it tedious and prone to error to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance-portability can be reached.
Choi, K; Suh, T; Xing, L
2012-06-01
Newly available flattening-filter-free (FFF) beams increase the dose rate by 3∼6 times at the central axis. In reality, even a flattening-filtered beam is not perfectly flat. In addition, the beam profiles across different fields may not have the same amplitude. The existing inverse planning formalism based on the total-variation of the intensity (or fluence) map cannot consider these properties of beam profiles. The purpose of this work is to develop a novel dose optimization scheme with incorporation of the inherent beam profiles to maximally utilize the efficacy of arbitrary beam profiles while preserving the convexity of the optimization problem. To increase the accuracy of the problem formalism, we decompose the fluence map as an elementwise multiplication of the inherent beam profile and a normalized transmission map (NTM). Instead of attempting to optimize the fluence maps directly, we optimize the NTMs and beam profiles separately. A least-squares problem constrained by the total-variation of the NTMs is developed to derive the optimal fluence maps that balance dose conformality and FFF beam delivery efficiency. With the resultant NTMs, we find beam profiles to renormalize the NTMs. The proposed method iteratively optimizes and renormalizes the NTMs in a closed-loop manner. The advantage of the proposed method is demonstrated by using a head-neck case with flat beam profiles and a prostate case with non-flat beam profiles. The obtained NTMs achieve a more conformal dose distribution while preserving piecewise constancy compared to the existing solution. The proposed formalism has two major advantages over the conventional inverse planning schemes: (1) it provides a unified framework for inverse planning with beams of arbitrary fluence profiles, including treatment with beams of mixed fluence profiles; (2) the use of total-variation constraints on the NTMs allows us to optimally balance the dose conformality and deliverability for a given beam configuration. This project was supported in part by grants from the National Science Foundation (0854492), National Cancer Institute (1R01 CA104205), and Leading Foreign Research Institute Recruitment Program by the Korean Ministry of Education, Science and Technology (K20901000001-09E0100-00110). To the authors' best knowledge, there is no conflict of interest. © 2012 American Association of Physicists in Medicine.
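The decomposition described above can be sketched in a toy one-dimensional setting: the fluence is the elementwise product of a fixed inherent beam profile and a normalized transmission map (NTM), and the NTM is fitted to a target dose with a smoothed total-variation penalty. The dose-influence matrix, the solver (plain projected gradient descent) and all parameter values below are illustrative assumptions, not the authors' formulation.

```python
# Toy 1D sketch: fluence = profile * NTM, NTM fitted by least squares with a
# smoothed total-variation penalty, followed by a renormalization step.
import numpy as np

rng = np.random.default_rng(0)
n_beamlets, n_voxels = 40, 60
D = rng.random((n_voxels, n_beamlets)) * 0.1                 # toy dose-influence matrix
profile = 1.0 - 0.3 * np.linspace(-1, 1, n_beamlets) ** 2    # non-flat inherent beam profile
d_target = D @ (profile * (np.linspace(0, 1, n_beamlets) > 0.3))  # toy prescription

def tv_grad(x, eps=1e-6):
    """Gradient of the smoothed total-variation penalty sum_i sqrt((x_{i+1}-x_i)^2 + eps)."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

ntm = np.full(n_beamlets, 0.5)
lam, step = 0.05, 5e-3
for _ in range(2000):
    fluence = profile * ntm
    grad_fit = profile * (D.T @ (D @ fluence - d_target))    # chain rule through the product
    ntm -= step * (grad_fit + lam * tv_grad(ntm))
    ntm = np.clip(ntm, 0.0, None)                            # transmission is non-negative
ntm /= ntm.max()                                             # renormalize the NTM
print("residual:", np.linalg.norm(D @ (profile * ntm) - d_target))
```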
Towards natural language question generation for the validation of ontologies and mappings.
Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos
2016-08-08
The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.
NASA Astrophysics Data System (ADS)
Kuang, Simeng Max
This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
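For intuition on the sample-based L2 optimal transport problem discussed in the first chapter, note that when the two empirical measures have the same number of equally weighted samples, the optimal coupling is a one-to-one assignment, so a small instance can be solved exactly with the Hungarian method. This generic sketch is not the thesis' algorithms; the point clouds are synthetic.

```python
# Minimal illustration of sample-based L2 optimal transport between two equal-size,
# uniformly weighted point clouds, solved exactly as an assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                 # samples from the source measure
Y = rng.normal(loc=3.0, size=(200, 2))        # samples from the target measure

# Squared Euclidean cost matrix C[i, j] = |x_i - y_j|^2
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
row, col = linear_sum_assignment(C)           # optimal pairing x_i -> y_{col[i]}
print("average squared transport cost:", C[row, col].mean())
```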
Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie
2018-05-18
Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.
Genome Maps, a new generation genome browser.
Medina, Ignacio; Salavert, Francisco; Sanchez, Rubén; de Maria, Alejandro; Alonso, Roberto; Escobar, Pablo; Bleda, Marta; Dopazo, Joaquín
2013-07-01
Genome browsers have gained importance as more genomes and related genomic information become available. However, the increase of information brought about by new generation sequencing technologies is, at the same time, causing a subtle but continuous decrease in the efficiency of conventional genome browsers. Here, we present Genome Maps, a genome browser that implements an innovative model of data transfer and management. The program uses highly efficient technologies from the new HTML5 standard, such as scalable vector graphics, that optimize workloads at both server and client sides and ensure future scalability. Thus, data management and representation are entirely carried out by the browser, without the need of any Java Applet, Flash or other plug-in technology installation. Relevant biological data on genes, transcripts, exons, regulatory features, single-nucleotide polymorphisms, karyotype and so forth, are imported from web services and are available as tracks. In addition, several DAS servers are already included in Genome Maps. As a novelty, this web-based genome browser allows the local upload of huge genomic data files (e.g. VCF or BAM) that can be dynamically visualized in real time at the client side, thus facilitating the management of medical data affected by privacy restrictions. Finally, Genome Maps can easily be integrated in any web application by including only a few lines of code. Genome Maps is an open source collaborative initiative available in the GitHub repository (https://github.com/compbio-bigdata-viz/genome-maps). Genome Maps is available at: http://www.genomemaps.org.
2002-12-01
sections of formalin-fixed guinea pig brains using different MAP-2 monoclonal antibodies. Brain sections were boiled in sodium citrate, citric acid...citric acid solution at pH 6.0 is the optimal microwave-assisted AR method for immunolabeling MAP-2 in formalin-fixed, paraffin-processed guinea pig brain...studies on archival guinea pig brain paraffin blocks, ultimately relaxing the use of additional animals to evaluate changes in MAP-2 expression between chemical warfare nerve agent-treated and control samples.
Feltus, F Alex
2014-06-01
Understanding the control of any trait optimally requires the detection of causal genes, gene interaction, and mechanism of action to discover and model the biochemical pathways underlying the expressed phenotype. Functional genomics techniques, including RNA expression profiling via microarray and high-throughput DNA sequencing, allow for the precise genome localization of biological information. Powerful genetic approaches, including quantitative trait locus (QTL) and genome-wide association study mapping, link phenotype with genome positions, yet genetics is less precise in localizing the relevant mechanistic information encoded in DNA. The coupling of salient functional genomic signals with genetically mapped positions is an appealing approach to discover meaningful gene-phenotype relationships. Techniques used to define this genetic-genomic convergence comprise the field of systems genetics. This short review will address an application of systems genetics where RNA profiles are associated with genetically mapped genome positions of individual genes (eQTL mapping) or as gene sets (co-expression network modules). Both approaches can be applied for knowledge independent selection of candidate genes (and possible control mechanisms) underlying complex traits where multiple, likely unlinked, genomic regions might control specific complex traits. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Camera pose estimation for augmented reality in a small indoor dynamic scene
NASA Astrophysics Data System (ADS)
Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad
2017-09-01
Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degree-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects since they do not have any information about scene structure and may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand and improving the precision of the camera pose and the quality of 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process on the other hand. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
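A minimal illustration of the kind of planar constraint exploited above is recovering a camera pose from four detected corners of a known planar target with a standard PnP solver. The intrinsics, target size and pixel coordinates below are made-up values; this is not the paper's SLAM pipeline.

```python
# Sketch of camera pose recovery from points on a known planar structure.
import numpy as np
import cv2

# Four corners of a 20 cm planar marker in the plane z = 0 (world frame, metres).
object_pts = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]], dtype=np.float32)
# Their (assumed) detected pixel positions in the current image.
image_pts = np.array([[320, 240], [420, 242], [418, 338], [322, 336]], dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)  # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
if ok:
    R, _ = cv2.Rodrigues(rvec)        # rotation matrix of the camera pose
    print("camera translation (m):", tvec.ravel())
```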
3D Mapping of Cultural Heritage: Special Problems and Best Practices in Extreme Case-Studies
NASA Astrophysics Data System (ADS)
Patias, P.; Kaimaris, D.; Georgiadis, Ch.; Stamnas, A.; Antoniadis, D.; Papadimitrakis, D.
2013-07-01
Photogrammetry has a long and successful history in the area of 3D modelling and documentation of cultural heritage monuments. In some cases, extensive study, preparation and the application of novel solutions are required for the successful documentation and 3D modelling of monuments. In most cases, the main difficulties concern accessing, photographing, and measuring the monument from the optimal distance, in combination with the need for high-spatial-resolution mapping. This paper highlights the special problems encountered and the novel solutions applied during the mapping of two significant cultural heritage monuments in Greece. The Roussanou monastery (1527-1529 A.D., Meteora, central Greece) and its underlying rock had to be photographed and measured from a far distance and at various spatial resolutions. In the lakeside Neolithic settlement of Dispilio (6,000 B.C., western Greece), the enclosure, which is covered with vegetation above a height of 3 m, had to be measured with high spatial resolution. The combined use of a laser scanner, a digital camera equipped with a telephoto lens, and a UAV allowed the successful mapping and the production of orthophotomaps in each case.
Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.
Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng
2013-11-01
The proliferation of GPS-enabled devices, and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high-performance spatial queries on large volumes of data has become increasingly important in numerous fields and requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large-scale spatial applications. In this demonstration, we present Hadoop-GIS - a scalable and high-performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. Parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing.
Coherence area profiling in multi-spatial-mode squeezed states
Lawrie, Benjamin J.; Pooser, Raphael C.; Otterstrom, Nils T.
2015-09-12
The presence of multiple bipartite entangled modes in squeezed states generated by four-wave mixing enables ultra-trace sensing, imaging, and metrology applications that are impossible to achieve with single-spatial-mode squeezed states. For Gaussian seed beams, the spatial distribution of these bipartite entangled modes, or coherence areas, across each beam is largely dependent on the spatial modes present in the pump beam, but it has proven difficult to map the distribution of these coherence areas in frequency and space. We demonstrate an accessible method to map the distribution of the coherence areas within these twin beams. In addition, we also show that the pump shape can impart different noise properties to each coherence area, and that it is possible to select and detect coherence areas with optimal squeezing with this approach.
Optimizing Wind Power Generation while Minimizing Wildlife Impacts in an Urban Area
Bohrer, Gil; Zhu, Kunpeng; Jones, Robert L.; Curtis, Peter S.
2013-01-01
The location of a wind turbine is critical to its power output, which is strongly affected by the local wind field. Turbine operators typically seek locations with the best wind at the lowest level above ground since turbine height affects installation costs. In many urban applications, such as small-scale turbines owned by local communities or organizations, turbine placement is challenging because of limited available space and because the turbine often must be added without removing existing infrastructure, including buildings and trees. The need to minimize turbine hazard to wildlife compounds the challenge. We used an exclusion zone approach for turbine-placement optimization that incorporates spatially detailed maps of wind distribution and wildlife densities with power output predictions for the Ohio State University campus. We processed public GIS records and airborne lidar point-cloud data to develop a 3D map of all campus buildings and trees. High resolution large-eddy simulations and long-term wind climatology were combined to provide land-surface-affected 3D wind fields and the corresponding wind-power generation potential. This power prediction map was then combined with bird survey data. Our assessment predicts that exclusion of areas where bird numbers are highest will have modest effects on the availability of locations for power generation. The exclusion zone approach allows the incorporation of wildlife hazard in wind turbine siting and power output considerations in complex urban environments even when the quantitative interaction between wildlife behavior and turbine activity is unknown. PMID:23409117
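A schematic version of the exclusion-zone siting step might look as follows: cells with the highest bird counts are excluded, and the turbine is placed at the remaining cell with the largest predicted power. The grids and the 90th-percentile exclusion threshold are illustrative placeholders, not the study's campus data.

```python
# Exclusion-zone siting sketch on synthetic rasters.
import numpy as np

rng = np.random.default_rng(2)
power = rng.gamma(shape=2.0, scale=50.0, size=(50, 50))      # predicted power per cell
birds = rng.poisson(lam=3.0, size=(50, 50)).astype(float)    # bird survey counts per cell

allowed = birds < np.percentile(birds, 90)                   # exclude the top 10% bird cells
masked_power = np.where(allowed, power, -np.inf)
best = np.unravel_index(np.argmax(masked_power), power.shape)

print("best allowed cell:", best, "power:", power[best])
print("power lost vs. unconstrained optimum:", power.max() - power[best])
```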
Average variograms to guide soil sampling
NASA Astrophysics Data System (ADS)
Kerry, R.; Oliver, M. A.
2004-10-01
To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and/or particular topographic positions, an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material showing the possible wider application of such averages to guide sampling.
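The half-the-range rule described above can be sketched as follows: estimate an experimental semivariogram from point data, fit a spherical model, and take half of the fitted range as the suggested grid spacing. The synthetic field, the lag binning and the spherical model choice below are illustrative assumptions.

```python
# Variogram-guided sampling interval on a synthetic soil property.
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
coords = rng.uniform(0, 500, size=(300, 2))                  # sample locations (m)
# Toy soil property with short-range spatial structure plus noise.
z = np.sin(coords[:, 0] / 50.0) + np.cos(coords[:, 1] / 50.0) + 0.2 * rng.normal(size=300)

h = pdist(coords)                                            # pairwise lag distances
g = 0.5 * pdist(z[:, None], metric="sqeuclidean")            # semivariance per pair

bins = np.arange(0, 300, 20.0)                               # lag bins for the experimental variogram
idx = np.digitize(h, bins)
lags = np.array([h[idx == i].mean() for i in range(1, len(bins))])
gamma = np.array([g[idx == i].mean() for i in range(1, len(bins))])
valid = ~np.isnan(gamma)                                     # guard against empty lag bins

def spherical(h, nugget, sill, rang):
    # Spherical variogram model: rises to nugget + sill at lag = rang, flat beyond.
    hs = np.minimum(h / rang, 1.0)
    return nugget + sill * (1.5 * hs - 0.5 * hs ** 3)

(nugget, sill, rang), _ = curve_fit(spherical, lags[valid], gamma[valid],
                                    p0=[0.01, gamma[valid].max(), 100.0],
                                    bounds=([0, 0, 1], [np.inf, np.inf, 1000]))
print(f"fitted range ~ {rang:.0f} m -> suggested sampling interval ~ {rang / 2:.0f} m")
```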
PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budavári, Tamás; Basu, Amitabh, E-mail: budavari@jhu.edu, E-mail: basu.amitabh@jhu.edu
2016-10-01
One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
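A minimal sketch of this matching step: build a matrix of pairwise negative log-likelihoods for detections from two catalogs and solve the resulting assignment problem with the Hungarian algorithm. The Gaussian astrometric likelihood and the synthetic catalogs below are illustrative assumptions.

```python
# Two-way catalog matching posed as an assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
cat1 = rng.uniform(0, 1, size=(30, 2))                              # positions in catalog 1 (deg)
cat2 = cat1[rng.permutation(30)] + 1e-3 * rng.normal(size=(30, 2))  # shuffled, perturbed catalog 2

sigma = 1e-3                                                        # assumed positional uncertainty (deg)
d2 = ((cat1[:, None, :] - cat2[None, :, :]) ** 2).sum(axis=-1)
neg_log_like = d2 / (2 * sigma ** 2)                                # up to an additive constant

row, col = linear_sum_assignment(neg_log_like)                      # maximum-likelihood partition
print("first matched pairs:", list(zip(row[:5], col[:5])))
```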
Fernandez, Michael; Abreu, Jose I; Shi, Hongqing; Barnard, Amanda S
2016-11-14
The possibility of band gap engineering in graphene opens countless new opportunities for application in nanoelectronics. In this work, the energy gaps of 622 computationally optimized graphene nanoflakes were mapped to topological autocorrelation vectors using machine learning techniques. Machine learning modeling revealed that the most relevant correlations appear at topological distances in the range of 1 to 42 with prediction accuracy higher than 80%. The data-driven model can statistically discriminate between graphene nanoflakes with different energy gaps on the basis of their molecular topology.
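The modelling pipeline can be illustrated schematically: each nanoflake is represented by its topological autocorrelation vector (distances 1 to 42) and a regressor learns the map to the energy gap. The data below are random placeholders and the random forest is a generic stand-in for the machine learning models used; only the workflow is shown.

```python
# Schematic feature-to-gap regression on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((622, 42))        # autocorrelation features at topological distances 1..42 (placeholder)
y = 0.5 * X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=622)   # placeholder energy gaps (eV)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```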
Using Machine Learning Techniques in the Analysis of Oceanographic Data
NASA Astrophysics Data System (ADS)
Falcinelli, K. E.; Abuomar, S.
2017-12-01
Acoustic Doppler Current Profilers (ADCPs) are oceanographic tools capable of collecting large amounts of current profile data. Using unsupervised machine learning techniques such as principal component analysis, fuzzy c-means clustering, and self-organizing maps, patterns and trends in an ADCP dataset are found. Cluster validity algorithms such as visual assessment of cluster tendency and clustering index are used to determine the optimal number of clusters in the ADCP dataset. These techniques prove to be useful in analysis of ADCP data and demonstrate potential for future use in other oceanographic applications.
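A minimal stand-in for this workflow: reduce the ADCP profiles with PCA, cluster them, and choose the number of clusters with a validity index. KMeans and the silhouette score are used below as simple substitutes for fuzzy c-means, self-organizing maps and the clustering index mentioned above; the profiles are synthetic.

```python
# PCA + clustering with a validity index on synthetic current profiles.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(6)
# Three groups of synthetic 40-bin current profiles.
profiles = np.vstack([rng.normal(loc=m, size=(100, 40)) for m in (0.0, 1.5, 3.0)])

reduced = PCA(n_components=5).fit_transform(profiles)
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
    scores[k] = silhouette_score(reduced, labels)

best_k = max(scores, key=scores.get)
print("silhouette by k:", {k: round(v, 2) for k, v in scores.items()}, "-> optimal k:", best_k)
```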
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward-compatible HDR image/video compression, a common approach is to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a single high-order polynomial or a 2-piecewise linear fit, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix reduces to looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared to the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
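For reference, the brute-force baseline that the proposed look-up-table method accelerates can be sketched as follows: for every candidate pivot, fit a 2nd-order polynomial on each side by least squares and keep the pivot with the smallest residual. The synthetic LDR/HDR pairs are placeholders, and the continuity constraint is omitted for brevity.

```python
# Exhaustive pivot search for 2-piecewise 2nd-order inverse tone mapping (baseline only).
import numpy as np

rng = np.random.default_rng(7)
ldr = np.sort(rng.uniform(0, 255, size=2000))
hdr = np.where(ldr < 128, 0.01 * ldr ** 2, 2.0 * ldr - 150) + rng.normal(scale=2.0, size=ldr.size)

def piecewise_fit(ldr, hdr, pivot):
    lo, hi = ldr <= pivot, ldr > pivot
    err = 0.0
    for mask in (lo, hi):
        if mask.sum() < 3:
            return np.inf
        coeffs = np.polyfit(ldr[mask], hdr[mask], deg=2)      # least-squares fit per piece
        err += np.sum((np.polyval(coeffs, ldr[mask]) - hdr[mask]) ** 2)
    return err

pivots = np.arange(8, 248)
errors = [piecewise_fit(ldr, hdr, p) for p in pivots]
print("optimal pivot:", pivots[int(np.argmin(errors))])
```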
Optimal Path Planning Program for Autonomous Speed Sprayer in Orchard Using Order-Picking Algorithm
NASA Astrophysics Data System (ADS)
Park, T. S.; Park, S. J.; Hwang, K. Y.; Cho, S. I.
This study was conducted to develop a software program that computes an optimal path for autonomous navigation in an orchard, especially for a speed sprayer. The feasibility of autonomous navigation in orchards has been shown by other studies, which minimized the distance error between the planned and performed paths. However, research on planning an optimal path for a speed sprayer in an orchard is scarce. In this study, a digital map and a database for the orchard were designed, containing GPS coordinate information (coordinates of trees and the orchard boundary) and entity information (heights and widths of trees, radius of the main stem of trees, tree diseases). An order-picking algorithm, which has been used for warehouse management, was used to calculate an optimum path based on the digital map. The database for the digital map was created using Microsoft Access, and the graphic interface for the database was made using Microsoft Visual C++ 6.0. It was possible to search and display information on the orchard boundary, tree locations, and the daily plan for scattering chemicals, and to plan an optimal path on different orchards based on the digital map under each circumstance (starting the speed sprayer at a different location, scattering chemicals on only selected trees).
Mars Mission Optimization Based on Collocation of Resources
NASA Technical Reports Server (NTRS)
Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.
2003-01-01
This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.
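The collocation idea can be illustrated with two raster layers: threshold a science-interest map and a resource map, then score each candidate landing site by how many cells meeting both criteria lie within the mission radius. The layers, thresholds and radius below are made-up placeholders, not PROMT's data or algorithms.

```python
# Site selection by resource collocation on synthetic rasters.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(9)
science = rng.random((120, 120))                 # e.g. mineralogical interest (placeholder)
resource = rng.random((120, 120))                # e.g. resource abundance proxy (placeholder)
collocated = (science > 0.7) & (resource > 0.6)  # cells satisfying both criteria

radius = 10                                      # mission radius in cells (placeholder)
yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
kernel = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)

# Count collocated cells reachable from every candidate site via a simple convolution.
score = fftconvolve(collocated.astype(float), kernel, mode="same")
best = np.unravel_index(np.argmax(score), score.shape)
print("best site (row, col):", best, "reachable collocated cells:", int(round(score[best])))
```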
NASA Astrophysics Data System (ADS)
Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.
2013-09-01
The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, which try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its clustering in 2D.
Noise pollution mapping approach and accuracy on landscape scales.
Iglesias Merchan, Carlos; Diaz-Balteiro, Luis
2013-04-01
Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome. This would then optimize the mapping time and the cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of the noise mapping of the road traffic noise at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations. Copyright © 2013 Elsevier B.V. All rights reserved.
Xiang, X D
Combinatorial materials synthesis methods and high-throughput evaluation techniques have been developed to accelerate the process of materials discovery and optimization and phase-diagram mapping. Analogous to integrated circuit chips, integrated materials chips containing thousands of discrete different compositions or continuous phase diagrams, often in the form of high-quality epitaxial thin films, can be fabricated and screened for interesting properties. Microspot x-ray method, various optical measurement techniques, and a novel evanescent microwave microscope have been used to characterize the structural, optical, magnetic, and electrical properties of samples on the materials chips. These techniques are routinely used to discover/optimize and map phase diagrams of ferroelectric, dielectric, optical, magnetic, and superconducting materials.
ERIC Educational Resources Information Center
McMillin, Bill; Gibson, Sally; MacDonald, Jean
2016-01-01
Animated maps of the library stacks were integrated into the catalog interface at Pratt Institute and into the EBSCO Discovery Service interface at Illinois State University. The mapping feature was developed for optimal automation of the update process to enable a range of library personnel to update maps and call-number ranges. The development…
Optimal feedback strategies for pursuit-evasion and interception in a plane
NASA Technical Reports Server (NTRS)
Rajan, N.; Ardema, M. D.
1983-01-01
Variable-speed pursuit-evasion and interception for two aircraft moving in a horizontal plane are analyzed in terms of a coordinate frame fixed in the plane at termination. Each participant's optimal motion can be represented by extremal trajectory maps. These maps are used to discuss sub-optimal approximations that are independent of the other participant. A method of constructing sections of the barrier, dispersal, and control-level surfaces and thus determining feedback strategies is described. Some examples are shown for pursuit-evasion and the minimum-time interception of a straight-flying target.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The advancing technology of high-resolution airborne imaging sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach that minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Eye aberration analysis with Zernike polynomials
NASA Astrophysics Data System (ADS)
Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.
1998-06-01
New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurements of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, requiring these data to be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The optimization criterion is the closest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations; in particular, consecutively varying the maximal coefficient indices of the Zernike polynomials, recalculating the coefficients, and computing the RMSD. The optimization finishes at the minimal RMSD value. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
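A hedged sketch of the fitting procedure outlined above: evaluate a set of (unnormalized) Zernike terms at the measured pupil points, solve the least-squares problem, and track the residual RMSD as terms are added. The wavefront samples and noise level below are synthetic placeholders.

```python
# Least-squares Zernike fit with a growing number of terms.
import numpy as np

rng = np.random.default_rng(8)
rho = np.sqrt(rng.uniform(0, 1, 500))          # sample points over the unit pupil
theta = rng.uniform(0, 2 * np.pi, 500)

def zernike_basis(rho, theta):
    """Columns: piston, x/y tilt, defocus, astigmatism, coma, spherical (unnormalized)."""
    return np.column_stack([
        np.ones_like(rho),
        rho * np.cos(theta), rho * np.sin(theta),
        2 * rho ** 2 - 1,
        rho ** 2 * np.cos(2 * theta), rho ** 2 * np.sin(2 * theta),
        (3 * rho ** 3 - 2 * rho) * np.cos(theta), (3 * rho ** 3 - 2 * rho) * np.sin(theta),
        6 * rho ** 4 - 6 * rho ** 2 + 1,
    ])

B = zernike_basis(rho, theta)
true_coeffs = np.array([0.0, 0.1, -0.05, 0.4, 0.1, 0.0, 0.02, 0.0, 0.08])
w = B @ true_coeffs + 0.01 * rng.normal(size=rho.size)       # simulated measured wavefront (um)

for n_terms in range(3, B.shape[1] + 1):
    coeffs, *_ = np.linalg.lstsq(B[:, :n_terms], w, rcond=None)
    rmsd = np.sqrt(np.mean((B[:, :n_terms] @ coeffs - w) ** 2))
    print(f"{n_terms} terms: RMSD = {rmsd:.4f} um")
```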
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; George, Thomas; Tarbell, Mark A.
2007-04-01
Robotic reconnaissance operations are called for in extreme environments, not only those such as space, including planetary atmospheres, surfaces, and subsurfaces, but also in potentially hazardous or inaccessible operational areas on Earth, such as mine fields, battlefield environments, enemy occupied territories, terrorist infiltrated environments, or areas that have been exposed to biochemical agents or radiation. Real time reconnaissance enables the identification and characterization of transient events. A fundamentally new mission concept for tier-scalable reconnaissance of operational areas, originated by Fink et al., is aimed at replacing the engineering and safety constrained mission designs of the past. The tier-scalable paradigm integrates multi-tier (orbit, atmosphere, surface/subsurface) and multi-agent (satellite, UAV/blimp, surface/subsurface sensing platforms) hierarchical mission architectures, introducing not only mission redundancy and safety, but also enabling and optimizing intelligent, less constrained, and distributed reconnaissance in real time. Given the mass, size, and power constraints faced by such a multi-platform approach, this is an ideal application scenario for a diverse set of MEMS sensors. To support such mission architectures, a high degree of operational autonomy is required. Essential elements of such operational autonomy are: (1) automatic mapping of an operational area from different vantage points (including vehicle health monitoring); (2) automatic feature extraction and target/region-of-interest identification within the mapped operational area; and (3) automatic target prioritization for close-up examination. These requirements imply the optimal deployment of MEMS sensors and sensor platforms, sensor fusion, and sensor interoperability.
NASA Astrophysics Data System (ADS)
Cipriani, L.; Fantini, F.; Bertacchi, S.
2014-06-01
Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve better-quality textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for achieving a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained using two different strategies: the first automatic and fast, the second manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances inadequate and negatively affects the overall quality of the representation. Using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.
Determination of MLC model parameters for Monaco using commercial diode arrays.
Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian
2016-07-08
Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters were delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose due to an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3% and 2 mm gamma criteria. Given the close agreement of the optimized model to both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates which ranged from 70.0%-97.9% for gamma criteria of 3% and 2 mm. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the dose in-field for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process. © 2016 The Authors
Large-scale parallel genome assembler over cloud computing environment.
Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong
2017-06-01
The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
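A toy illustration of the de Bruijn graph construction underlying such assemblers, written as a map/reduce-style pipeline in plain Python (the real system distributes these steps with Hadoop and Giraph); the reads and the k-mer length are placeholder values.

```python
# Map/reduce-style de Bruijn graph construction on a few toy reads.
from collections import defaultdict

reads = ["ACGTACGT", "CGTACGTT", "GTACGTTA"]
k = 4

def map_kmers(read):
    # Map step: emit a (prefix, suffix) edge for every k-mer in the read.
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        yield kmer[:-1], kmer[1:]

def reduce_edges(pairs):
    # Reduce step: group edges by source (k-1)-mer, keeping multiplicities.
    graph = defaultdict(lambda: defaultdict(int))
    for src, dst in pairs:
        graph[src][dst] += 1
    return graph

edges = (edge for read in reads for edge in map_kmers(read))
graph = reduce_edges(edges)
for src, dsts in graph.items():
    print(src, "->", dict(dsts))
```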
NASA Astrophysics Data System (ADS)
Murrieta Mendoza, Alejandro
Aircraft reference trajectory optimization is an alternative method to reduce fuel consumption and thus the pollution released into the atmosphere. Fuel consumption reduction is of special importance for two reasons: first, because the aeronautical industry is responsible for 2% of the CO2 released into the atmosphere, and second, because it reduces the flight cost. The aircraft fuel model was obtained from a numerical performance database which was created and validated by our industrial partner from experimental flight test data. A new methodology using the numerical database was proposed in this thesis to compute the fuel burn for a given trajectory. Weather parameters such as wind and temperature were taken into account as they have an important effect on fuel burn. The open source model used to obtain the weather forecast was provided by Weather Canada. A combination of linear and bi-linear interpolations allowed finding the required weather data. The search space was modelled using different graphs: one graph was used for mapping the different flight phases such as climb, cruise and descent, and another graph was used for mapping the physical space in which the aircraft would perform its flight. The vertical reference trajectory was optimized using the Beam Search algorithm, and a combination of the Beam Search algorithm with a search space reduction technique. The trajectory was then optimized simultaneously for the vertical and lateral reference navigation plans while fulfilling a Required Time of Arrival constraint using metaheuristic algorithms, including the artificial bee colony and ant colony optimization. Results were validated using the FlightSIM commercial Flight Management System software, an exhaustive search algorithm, and as-flown flights obtained from FlightAware. All algorithms were able to reduce the fuel burn and the flight costs.
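A minimal sketch of the beam-search idea used for the vertical reference trajectory is shown below: candidate states (here, cruise altitudes) are expanded layer by layer and only the cheapest partial profiles are retained. The fuel-burn cost model, layer structure and all names are invented for illustration and do not reflect the thesis's numerical performance database or weather interpolation.

```python
import heapq

def beam_search_profile(layers, fuel_burn, beam_width=3):
    """Beam search over a layered graph of flight-profile choices.

    layers   : list of lists; layers[i] holds candidate states (e.g. altitudes) for step i.
    fuel_burn: callable (step, prev_state, state) -> segment fuel cost.
    Returns (total_fuel, best_profile).
    """
    beam = [(0.0, [])]                       # (accumulated fuel, partial profile)
    for step, candidates in enumerate(layers):
        expanded = []
        for cost, profile in beam:
            prev = profile[-1] if profile else None
            for state in candidates:
                expanded.append((cost + fuel_burn(step, prev, state), profile + [state]))
        # Keep only the beam_width cheapest partial profiles.
        beam = heapq.nsmallest(beam_width, expanded, key=lambda e: e[0])
    return min(beam, key=lambda e: e[0])

# Illustrative cost: flying higher is cheaper in cruise, but climbing costs extra fuel.
def toy_fuel_burn(step, prev_alt, alt):
    climb_penalty = 0.0 if prev_alt is None else max(0.0, alt - prev_alt) * 0.02
    return 100.0 - 0.8 * (alt / 1000.0) + climb_penalty

altitude_layers = [[30000, 34000, 38000]] * 5   # five segments, three candidate altitudes each
print(beam_search_profile(altitude_layers, toy_fuel_burn, beam_width=2))
```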
Exploring the quantum speed limit with computer games
NASA Astrophysics Data System (ADS)
Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.
2016-04-01
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
Cartesian control of redundant robots
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.
1989-01-01
A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ R^m and the robot actuator torque T ∈ R^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four degree of freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.
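The sketch below illustrates, in generic terms, how an underdetermined F → T map can be resolved for a redundant robot: any torque of the form T = JᵀF plus a null-space component realizes the same task-space force, leaving the null-space vector free for secondary objectives. It uses the standard dynamically consistent inverse rather than the paper's specific dynamically optimal map, and all matrices and numbers are toy values.

```python
import numpy as np

def redundant_torque_map(J, M, F, tau_null):
    """Illustrative F -> T map for a redundant robot (m < n).

    Any torque T = J^T F + N^T tau_null produces the same task-space force F,
    where N = I - J_bar J is the dynamically consistent null-space projector and
    J_bar = M^-1 J^T (J M^-1 J^T)^-1. The free vector tau_null can be chosen to
    improve dynamic response or meet a kinematic objective.
    """
    Minv = np.linalg.inv(M)
    Lambda = np.linalg.inv(J @ Minv @ J.T)      # task-space inertia
    J_bar = Minv @ J.T @ Lambda                 # dynamically consistent generalized inverse
    N = np.eye(J.shape[1]) - J_bar @ J          # null-space projector
    return J.T @ F + N.T @ tau_null

# Toy 2-dimensional task / 4-joint example with an arbitrary posture torque.
rng = np.random.default_rng(0)
J = rng.standard_normal((2, 4))
M = np.eye(4)                                   # unit joint-space inertia for illustration
F = np.array([1.0, -0.5])
tau0 = np.array([0.1, 0.0, -0.1, 0.0])
print(redundant_torque_map(J, M, F, tau0))
```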
NASA Technical Reports Server (NTRS)
Leong, Harrison Monfook
1988-01-01
General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient-search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.
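A toy comparison of the two approaches on a quadratic objective is sketched below: a gradient-flow ODE (a stand-in for the neural-network dynamical system) is Euler-integrated until it settles near the minimizer, and plain gradient descent is iterated to the same tolerance. The objective, step sizes and tolerances are arbitrary and unrelated to the paper's cerebral-localization problem.

```python
import numpy as np

# Toy objective: E(x) = 0.5 * x^T A x - b^T x, minimized at x* = A^-1 b
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)

def settle_ode(x0, tau=0.1, dt=0.01, tol=1e-4, max_steps=100000):
    """Euler-integrate the gradient-flow ODE  tau * dx/dt = -grad E(x)  until settled."""
    x, t = np.array(x0, float), 0.0
    while np.linalg.norm(x - x_star) > tol and t / dt < max_steps:
        x += dt * (-grad(x) / tau)
        t += dt
    return t

def settle_gd(x0, lr=0.05, tol=1e-4, max_steps=100000):
    """Plain gradient descent with a fixed learning rate."""
    x, steps = np.array(x0, float), 0
    while np.linalg.norm(x - x_star) > tol and steps < max_steps:
        x -= lr * grad(x)
        steps += 1
    return steps

x0 = [5.0, 5.0]
print("ODE settling time:", settle_ode(x0))
print("gradient-descent iterations:", settle_gd(x0))
```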
Aerodynamic shape optimization of wing and wing-body configurations using control theory
NASA Technical Reports Server (NTRS)
Reuther, James; Jameson, Antony
1995-01-01
This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.
A tutorial in displaying mass spectrometry-based proteomic data using heat maps.
Key, Melissa
2012-01-01
Data visualization plays a critical role in interpreting experimental results of proteomic experiments. Heat maps are particularly useful for this task, as they allow us to find quantitative patterns across proteins and biological samples simultaneously. The quality of a heat map can be vastly improved by understanding the options available to display and organize the data in the heat map. This tutorial illustrates how to optimize heat maps for proteomics data by incorporating known characteristics of the data into the image. First, the concepts used to guide the creation of heat maps are demonstrated. Then, these concepts are applied to two types of analysis: visualizing spectral features across biological samples, and presenting the results of tests of statistical significance. For all examples we provide details of computer code in the open-source statistical programming language R, which can be used by biologists and clinicians with little statistical background. Heat maps are a useful tool for presenting quantitative proteomic data organized in a matrix format. Understanding and optimizing the parameters used to create the heat map can vastly improve both the appearance and the interpretation of heat map data.
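The tutorial's examples are written in R; the sketch below is a rough Python/matplotlib analogue of two of the ideas it discusses, ordering rows by hierarchical clustering and using a symmetric diverging color scale. Protein and sample names and the abundance matrix are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, leaves_list

# Hypothetical log2 protein abundances: rows = proteins, columns = samples.
rng = np.random.default_rng(1)
data = rng.normal(size=(20, 6))
proteins = [f"prot_{i}" for i in range(20)]
samples = [f"S{j}" for j in range(6)]

# Order the rows by hierarchical clustering so similar proteins sit together.
order = leaves_list(linkage(data, method="average"))
ordered = data[order]

fig, ax = plt.subplots(figsize=(4, 6))
lim = np.abs(data).max()
im = ax.imshow(ordered, aspect="auto", cmap="RdBu_r", vmin=-lim, vmax=lim)  # symmetric scale
ax.set_xticks(range(len(samples)))
ax.set_xticklabels(samples)
ax.set_yticks(range(len(proteins)))
ax.set_yticklabels([proteins[i] for i in order])
fig.colorbar(im, label="log2 abundance")
fig.tight_layout()
plt.show()
```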
A multi-plate velocity-map imaging design for high-resolution photoelectron spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kregel, Steven J.; Thurston, Glen K.; Zhou, Jia
A velocity map imaging (VMI) setup consisting of multiple electrodes with three adjustable voltage parameters, designed for slow electron velocity map imaging applications, is presented. The motivations for this design are discussed in terms of parameters that influence the VMI resolution and functionality. Particularly, this VMI has two tunable potentials used to adjust for optimal focus, yielding good VMI focus across a relatively large energy range. It also allows for larger interaction volumes without significant sacrifice to the resolution via a smaller electric gradient at the interaction region. All the electrodes in this VMI have the same dimensions for practicality and flexibility, allowing for relatively easy modifications to suit different experimental needs. We have coupled this VMI to a cryogenic ion trap mass spectrometer that has a flexible source design. The performance is demonstrated with the photoelectron spectra of S⁻ and CS₂⁻. The latter has a long vibrational progression in the ground state, and the temperature dependence of the vibronic features is probed by changing the temperature of the ion trap.
Nasrallah, Fatima A; Lee, Eugene L Q; Chuang, Kai-Hsiang
2012-11-01
Arterial spin labeling (ASL) MRI provides a noninvasive method to image perfusion, and has been applied to map neural activation in the brain. Although pulsed labeling methods have been widely used in humans, continuous ASL with a dedicated neck labeling coil is still the preferred method in rodent brain functional MRI (fMRI) to maximize the sensitivity and allow multislice acquisition. However, the additional hardware is not readily available and hence its application is limited. In this study, flow-sensitive alternating inversion recovery (FAIR) pulsed ASL was optimized for fMRI of rat brain. A practical challenge of FAIR is the suboptimal global inversion by the transmit coil of limited dimensions, which results in low effective labeling. By using a large volume transmit coil and proper positioning to optimize the body coverage, the perfusion signal was increased by 38.3% compared with positioning the brain at the isocenter. An additional 53.3% gain in signal was achieved using optimized repetition and inversion times compared with a long TR. Under electrical stimulation to the forepaws, a perfusion activation signal change of 63.7 ± 6.3% can be reliably detected in the primary somatosensory cortices using single slice or multislice echo planar imaging at 9.4 T. This demonstrates the potential of using pulsed ASL for multislice perfusion fMRI in functional and pharmacological applications in rat brain. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
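The sketch below is a much-simplified greedy list-scheduling variant of the same idea: ready task modules are ordered by a priority and each is placed on the processor that minimizes its finishing time, accounting for inter-processor communication. It is not the paper's weighted-bipartite-matching or simulated-annealing algorithm, and the task graph, costs and names are invented.

```python
from collections import defaultdict

def list_schedule(tasks, deps, cost, comm, n_proc):
    """Greedy list scheduling of a task DAG onto identical processors.

    tasks: iterable of task ids; deps[t]: set of predecessors of t;
    cost[t]: compute time; comm[(u, t)]: transfer time if u and t run on
    different processors. Each ready task goes to the processor that lets it
    finish earliest (processor availability plus communication delays).
    """
    finish, proc_of = {}, {}
    avail = [0.0] * n_proc
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    ready = [t for t in tasks if not remaining[t]]
    children = defaultdict(list)
    for t, ps in deps.items():
        for p in ps:
            children[p].append(t)

    while ready:
        ready.sort(key=lambda t: -cost[t])   # simple priority: longest compute time first
        t = ready.pop(0)
        best = None
        for p in range(n_proc):
            arrive = max([avail[p]] + [finish[u] + (0.0 if proc_of[u] == p
                          else comm.get((u, t), 0.0)) for u in deps.get(t, ())])
            f = arrive + cost[t]
            if best is None or f < best[0]:
                best = (f, p)
        finish[t], proc_of[t] = best
        avail[best[1]] = best[0]
        for c in children[t]:
            remaining[c].discard(t)
            if not remaining[c]:
                ready.append(c)
    return max(finish.values()), proc_of

tasks = ["a", "b", "c", "d"]
deps = {"c": {"a", "b"}, "d": {"c"}}
cost = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
comm = {("a", "c"): 0.5, ("b", "c"): 0.5, ("c", "d"): 1.0}
print(list_schedule(tasks, deps, cost, comm, n_proc=2))
```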
Range image registration based on hash map and moth-flame optimization
NASA Astrophysics Data System (ADS)
Zou, Li; Ge, Baozhen; Chen, Lei
2018-03-01
Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.
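One simple way a hash map can curb over-exploitation is to cache fitness evaluations keyed by quantised transform parameters, so that revisited candidates cost a lookup instead of a full registration-error computation. The sketch below illustrates that idea only; the quantisation step, the toy error function and all names are assumptions, not the paper's algorithm.

```python
import numpy as np

class CachedFitness:
    """Cache registration-error evaluations keyed by a quantised parameter vector."""

    def __init__(self, error_fn, step=0.01):
        self.error_fn = error_fn
        self.step = step
        self.cache = {}
        self.misses = 0

    def __call__(self, params):
        # Quantise the transform parameters so nearby candidates share a key.
        key = tuple(np.round(np.asarray(params) / self.step).astype(int))
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.error_fn(params)
        return self.cache[key]

# Toy "registration error": distance of a 6-vector (rotation + translation) from a target.
target = np.array([0.1, -0.2, 0.05, 1.0, 2.0, -0.5])
fitness = CachedFitness(lambda p: float(np.linalg.norm(np.asarray(p) - target)))

rng = np.random.default_rng(2)
population = rng.normal(size=(50, 6))
scores = [fitness(ind) for ind in population]
scores_again = [fitness(ind) for ind in population]   # all served from the cache
print(fitness.misses, "unique evaluations for", 2 * len(population), "queries")
```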
Optimizing the Usability of Brain-Computer Interfaces.
Zhang, Yin; Chase, Steve M
2018-05-01
Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics
Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force in the sense that allows the approximation of the potential of mean force under both linear and non-linear coarse-graining mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization which we derive for general non-linear coarse-graining maps. We present in detail the generalized force matching condition through applications to specific examples in molecular systems.
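For orientation, the force-matching/conditional-expectation connection summarised above can be written schematically as follows; the notation is chosen here for illustration (assumes amsmath/amssymb) and is not taken verbatim from the paper.

```latex
% Illustrative notation: \Pi is the coarse-graining map, z = \Pi(x) a coarse
% configuration, f(x) the atomistic force projected onto the coarse variables.
\begin{align*}
  F^{*} &= \operatorname*{arg\,min}_{F}\; \mathbb{E}\,\bigl\| f(x) - F(\Pi(x)) \bigr\|^{2},\\
  F^{*}(z) &= \mathbb{E}\bigl[\, f(x) \,\bigm|\, \Pi(x) = z \,\bigr]
              \quad\text{(the local mean force)},\\
  A(z) &= -k_{B}T \,\ln \int e^{-\beta U(x)}\,\delta\bigl(\Pi(x)-z\bigr)\,dx,
  \qquad -\nabla_{z} A(z) \approx F^{*}(z).
\end{align*}
```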
Mapping Sensory Spots for Moderate Temperatures on the Back of Hand.
Yang, Fan; Chen, Guixu; Zhou, Sikai; Han, Danhong; Xu, Jingjing; Xu, Shengyong
2017-12-04
Thermosensation with thermoreceptors plays an important role in maintaining body temperature at an optimal state and avoiding potential damage caused by harmful hot or cold environmental temperatures. In this work, the locations of sensory spots for sensing moderate temperatures of 40-50 °C on the back of the hands of young Chinese people were mapped in a blind-test manner with a thermal probe of 1.0 mm spatial resolution. The number of sensory spots increased along with the testing temperature; however, the surface density of sensory spots was remarkably lower than those reported previously. The locations of the spots were irregularly distributed and subject-dependent. Even for the same subject, the number and location of sensory spots were unbalanced and asymmetric between the left and right hands. The results may offer valuable information for designing artificial electronic skin and wearable devices, as well as for clinical applications.
NASA Astrophysics Data System (ADS)
Meyer, Quentin; Ronaszegi, Krisztian; Pei-June, Gan; Curnick, Oliver; Ashton, Sean; Reisch, Tobias; Adcock, Paul; Shearing, Paul R.; Brett, Daniel J. L.
2015-09-01
Selecting the ideal operating point for a fuel cell depends on the application and consequent trade-off between efficiency, power density and various operating considerations. A systematic methodology for determining the optimal operating point for fuel cells is lacking; there is also the need for a single-value metric to describe and compare fuel cell performance. This work shows how the 'current of lowest resistance' can be accurately measured using electrochemical impedance spectroscopy and used as a useful metric of fuel cell performance. This, along with other measures, is then used to generate an 'electro-thermal performance map' of fuel cell operation. A commercial air-cooled open-cathode fuel cell is used to demonstrate how the approach can be used; in this case leading to the identification of the optimum operating temperature of ∼45 °C.
SAD-Based Stereo Matching Using FPGAs
NASA Astrophysics Data System (ADS)
Ambrosch, Kristian; Humenberger, Martin; Kubinger, Wilfried; Steininger, Andreas
In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel's Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
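A non-real-time software reference of the SAD block-matching step is sketched below for orientation; the FPGA design computes the same quantity in a parallel, pipelined fashion. Image sizes, block size and disparity range are reduced toy values, not the 450×375 / 150-pixel configuration of the chapter.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, block=5):
    """Reference (non-real-time) SAD block matching: for each pixel in the left image,
    pick the disparity whose block-wise sum of absolute differences against the
    right image is smallest."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left image shifted left by 4 pixels.
rng = np.random.default_rng(3)
left = rng.integers(0, 255, size=(40, 60))
right = np.roll(left, -4, axis=1)
print(np.bincount(sad_disparity(left, right).ravel()).argmax())  # dominant disparity
```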
Finding theory- and evidence-based alternatives to fear appeals: Intervention Mapping
Kok, Gerjo; Bartholomew, L Kay; Parcel, Guy S; Gottlieb, Nell H; Fernández, María E
2014-01-01
Fear arousal—vividly showing people the negative health consequences of life-endangering behaviors—is popular as a method to raise awareness of risk behaviors and to change them into health-promoting behaviors. However, most data suggest that, under conditions of low efficacy, the resulting reaction will be defensive. Instead of applying fear appeals, health promoters should identify effective alternatives to fear arousal by carefully developing theory- and evidence-based programs. The Intervention Mapping (IM) protocol helps program planners to optimize chances for effectiveness. IM describes the intervention development process in six steps: (1) assessing the problem and community capacities, (2) specifying program objectives, (3) selecting theory-based intervention methods and practical applications, (4) designing and organizing the program, (5) planning, adoption, and implementation, and (6) developing an evaluation plan. Authors who used IM indicated that it helped in bringing the development of interventions to a higher level. PMID:24811880
Comparing cosmic web classifiers using information theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin
We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.
ADME-Space: a new tool for medicinal chemists to explore ADME properties.
Bocci, Giovanni; Carosati, Emanuele; Vayer, Philippe; Arrault, Alban; Lozano, Sylvain; Cruciani, Gabriele
2017-07-25
We introduce a new chemical space for drugs and drug-like molecules, exclusively based on their in silico ADME behaviour. This ADME-Space is based on a self-organizing map (SOM) applied to 26,000 molecules. Twenty accurate QSPR models, describing important ADME properties, were developed and, successively, used as new molecular descriptors not related to molecular structure. Applications include permeability, active transport, metabolism and bioavailability studies, but the method can even be used to discuss drug-drug interactions (DDIs) or be extended to additional ADME properties. Thus, the ADME-Space opens a new framework for multi-parametric data analysis in drug discovery where all ADME behaviours of molecules are condensed in one map: it allows medicinal chemists to simultaneously monitor several ADME properties, rapidly select optimal ADME profiles, retrieve warnings on potential ADME problems and DDIs, or select proper in vitro experiments.
Gu, Yingxin; Howard, Daniel M.; Wylie, Bruce K.; Zhang, Li
2012-01-01
Flux tower networks (e. g., AmeriFlux, Agriflux) provide continuous observations of ecosystem exchanges of carbon (e. g., net ecosystem exchange), water vapor (e. g., evapotranspiration), and energy between terrestrial ecosystems and the atmosphere. The long-term time series of flux tower data are essential for studying and understanding terrestrial carbon cycles, ecosystem services, and climate changes. Currently, there are 13 flux towers located within the Great Plains (GP). The towers are sparsely distributed and do not adequately represent the varieties of vegetation cover types, climate conditions, and geophysical and biophysical conditions in the GP. This study assessed how well the available flux towers represent the environmental conditions or "ecological envelopes" across the GP and identified optimal locations for future flux towers in the GP. Regression-based remote sensing and weather-driven net ecosystem production (NEP) models derived from different extrapolation ranges (10 and 50%) were used to identify areas where ecological conditions were poorly represented by the flux tower sites and years previously used for mapping grassland fluxes. The optimal lands suitable for future flux towers within the GP were mapped. Results from this study provide information to optimize the usefulness of future flux towers in the GP and serve as a proxy for the uncertainty of the NEP map.
Lee, J K; Perin, J; Parkinson, C; O'Connor, M; Gilmore, M M; Reyes, M; Armstrong, J; Jennings, J M; Northington, F J; Chavez-Valdez, R
2017-08-01
We studied whether cerebral blood pressure autoregulation and kidney and liver injuries are associated in neonatal encephalopathy (NE). We monitored autoregulation of 75 newborns who received hypothermia for NE in the neonatal intensive care unit to identify the mean arterial blood pressure with optimized autoregulation (MAPOPT). Autoregulation parameters and creatinine, aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were analyzed using adjusted regression models. Greater time with blood pressure within MAPOPT during hypothermia was associated with lower creatinine in girls. Blood pressure below MAPOPT related to higher ALT and AST during normothermia in all neonates and boys. The opposite occurred in rewarming when more time with blood pressure above MAPOPT related to higher AST. Blood pressures that optimize cerebral autoregulation may support the kidneys. Blood pressures below MAPOPT and liver injury during normothermia are associated. The relationship between MAPOPT and AST during rewarming requires further study.
Optimized MLAA for quantitative non-TOF PET/MR of the brain
NASA Astrophysics Data System (ADS)
Benoit, Didier; Ladefoged, Claes N.; Rezaei, Ahmadreza; Keller, Sune H.; Andersen, Flemming L.; Højgaard, Liselotte; Hansen, Adam E.; Holm, Søren; Nuyts, Johan
2016-12-01
For quantitative tracer distribution in positron emission tomography, attenuation correction is essential. In a hybrid PET/CT system the CT images serve as a basis for generation of the attenuation map, but in PET/MR, the MR images do not have a similarly simple relationship with the attenuation map. Hence attenuation correction in PET/MR systems is more challenging. Typically either of two MR sequences is used: the Dixon or the ultra-short echo time (UTE) techniques. However these sequences have some well-known limitations. In this study, a reconstruction technique based on a modified and optimized non-TOF MLAA is proposed for PET/MR brain imaging. The idea is to tune the parameters of the MLTR applying some information from an attenuation image computed from the UTE sequences and a T1w MR image. In this MLTR algorithm, an α_j parameter is introduced and optimized in order to drive the algorithm to a final attenuation map most consistent with the emission data. Because the non-TOF MLAA is used, a technique to reduce the cross-talk effect is proposed. In this study, the proposed algorithm is compared to common reconstruction methods such as OSEM using a CT attenuation map, considered as the reference, and OSEM using the Dixon and UTE attenuation maps. To show the robustness and the reproducibility of the proposed algorithm, a set of 204 [18F]FDG patients, 35 [11C]PiB patients and 1 [18F]FET patient is used. The results show that by choosing an optimized value of α_j in MLTR, the proposed algorithm improves the results compared to the standard MR-based attenuation correction methods (i.e. OSEM using the Dixon or the UTE attenuation maps), and the cross-talk and scale problems are limited.
CRISM Multispectral and Hyperspectral Mapping Data - A Global Data Set for Hydrated Mineral Mapping
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Hash, C. D.; Murchie, S. L.; Lim, H.
2017-12-01
The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a visible through short-wave infrared hyperspectral imaging spectrometer (VNIR S-detector: 364-1055 nm; IR L-detector: 1001-3936 nm; 6.55 nm sampling) that has been in operation on the Mars Reconnaissance Orbiter (MRO) since 2006. Over the course of the MRO mission, CRISM has acquired 290,000 individual mapping observation segments (mapping strips) with a variety of observing modes and data characteristics (VNIR/IR; 100/200 m/pxl; multi-/hyper-spectral band selection) over a wide range of observing conditions (atmospheric state, observation geometry, instrument state). CRISM mapping data coverage density varies primarily with latitude and secondarily due to seasonal and operational considerations. The aggregate global IR mapping data coverage currently stands at 85% (80% at the equator, with 40% repeat sampling), which is sufficient spatial sampling density to support the assembly of empirically optimized, radiometrically consistent mapping mosaic products. The CRISM project has defined a number of mapping mosaic data products (e.g. Multispectral Reduced Data Record (MRDR) map tiles) with varying degrees of observation-specific processing and correction applied prior to mosaic assembly. A commonality among the mosaic products is the presence of inter-observation radiometric discrepancies which are traceable to variable observation circumstances or associated atmospheric/photometric correction residuals. The empirical approach to radiometric reconciliation leverages inter-observation spatial overlaps and proximal relationships to construct a graph that encodes the mosaic structure and radiometric discrepancies. The graph theory abstraction allows the underlying structure of the mosaic to be evaluated and the corresponding optimization problem configured so that it is well-posed. Linear and non-linear least squares optimization is then employed to derive a set of observation- and wavelength-specific model parameters for a series of transform functions that minimize the total radiometric discrepancy across the mosaic. This empirical approach to CRISM data radiometric reconciliation and the utility of the resulting mapping data mosaic products for hydrated mineral mapping will be presented.
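As a toy illustration of the least-squares reconciliation step, the sketch below solves for per-observation multiplicative gains that minimise log-ratio discrepancies over pairwise overlaps, with one observation anchored to fix the global scale. The real pipeline solves for observation- and wavelength-specific transform parameters over the full mosaic graph; the overlap ratios and function names here are invented.

```python
import numpy as np

def reconcile_gains(n_obs, overlaps, anchor=0):
    """Solve for per-observation multiplicative gains g_i minimizing the summed
    squared log-ratio discrepancy over overlapping observation pairs.

    overlaps: list of (i, j, ratio) where `ratio` is the median radiance ratio
    obs_i / obs_j measured in their spatial overlap. In log space each overlap
    gives a linear equation  log g_j - log g_i = log ratio, and one observation
    is anchored at gain 1 to fix the global scale.
    """
    rows, rhs = [], []
    for i, j, ratio in overlaps:
        row = np.zeros(n_obs)
        row[j], row[i] = 1.0, -1.0
        rows.append(row)
        rhs.append(np.log(ratio))
    # Anchor equation: log g_anchor = 0
    row = np.zeros(n_obs)
    row[anchor] = 1.0
    rows.append(row)
    rhs.append(0.0)
    log_g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(log_g)

# Median overlap ratios obs_i/obs_j for three observation pairs (roughly consistent).
overlaps = [(0, 1, 1.10), (1, 2, 0.86), (0, 2, 0.95)]
print(reconcile_gains(3, overlaps))
```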
Hori, Daijiro; Hogue, Charles; Adachi, Hideo; Max, Laura; Price, Joel; Sciortino, Christopher; Zehr, Kenton; Conte, John; Cameron, Duke; Mandal, Kaushik
2016-04-01
Perioperative blood pressure management by targeting individualized optimal blood pressure, determined by cerebral blood flow autoregulation monitoring, may ensure sufficient renal perfusion. The purpose of this study was to evaluate changes in the optimal blood pressure for individual patients, determined during cardiopulmonary bypass (CPB) and during the early postoperative period in the intensive care unit (ICU). A secondary aim was to examine if excursions below optimal blood pressure in the ICU are associated with risk of cardiac surgery-associated acute kidney injury (CSA-AKI). One hundred and ten patients undergoing cardiac surgery had cerebral blood flow monitored with a novel technology using ultrasound tagged near infrared spectroscopy (UT-NIRS) during CPB and in the first 3 h after surgery in the ICU. The correlation flow index (CFx) was calculated as a moving, linear correlation coefficient between cerebral flow index measured using UT-NIRS and mean arterial pressure (MAP). Optimal blood pressure was defined as the MAP with the lowest CFx. Changes in optimal blood pressure in the perioperative period were observed and the association of blood pressure excursions (magnitude and duration) below the optimal blood pressure [area under the curve (AUC<OptMAP), mmHg×h] with incidence of CSA-AKI (defined using Kidney Disease: Improving Global Outcomes criteria) was examined. Optimal blood pressure during early ICU stay and CPB was correlated (r = 0.46, P < 0.0001), but was significantly higher in the ICU compared with during CPB (75 ± 8.7 vs 71 ± 10.3 mmHg, P = 0.0002). Thirty patients (27.3%) developed CSA-AKI within 48 h after the surgery. AUC<OptMAP was associated with CSA-AKI during CPB [median, 13.27 mmHg×h, interquartile range (IQR), 4.63-20.14 vs median, 6.05 mmHg×h, IQR 3.03-12.40, P = 0.008], and in the ICU (13.72 mmHg×h, IQR 5.09-25.54 vs 5.65 mmHg×h, IQR 1.71-13.07, P = 0.022). Optimal blood pressure during CPB and in the ICU was correlated. Excursions below optimal blood pressure (AUC<OptMAP, mmHg×h) during the perioperative period are associated with CSA-AKI. Individualized blood pressure management based on cerebral autoregulation monitoring during the perioperative period may help improve CSA-AKI-related outcomes. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
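The sketch below illustrates the kind of computation described: a moving correlation between a flow index and MAP (CFx), an optimal MAP taken as the pressure bin with the lowest mean CFx, and the area of excursions below that optimum in mmHg×h. Window length, bin width, sampling interval and the synthetic trace are arbitrary choices, not the study's monitoring software.

```python
import numpy as np

def cfx_series(flow_index, map_mmHg, window=60):
    """Moving Pearson correlation between cerebral flow index and MAP."""
    cfx = np.full(len(map_mmHg), np.nan)
    for i in range(window, len(map_mmHg) + 1):
        cfx[i - 1] = np.corrcoef(flow_index[i - window:i], map_mmHg[i - window:i])[0, 1]
    return cfx

def optimal_map(cfx, map_mmHg, bin_width=5):
    """Optimal MAP = centre of the pressure bin with the lowest mean CFx."""
    bins = (map_mmHg // bin_width) * bin_width
    valid = ~np.isnan(cfx)
    centres = np.unique(bins[valid])
    means = [cfx[valid & (bins == c)].mean() for c in centres]
    return centres[int(np.argmin(means))] + bin_width / 2

def auc_below_opt(map_mmHg, opt, dt_hours):
    """Area (mmHg x h) spent below the optimal MAP."""
    deficit = np.clip(opt - map_mmHg, 0, None)
    return deficit.sum() * dt_hours

# Synthetic monitoring trace sampled every 10 s over 3 h.
rng = np.random.default_rng(4)
t = np.arange(0, 3 * 3600, 10)
map_trace = 75 + 8 * np.sin(t / 900) + rng.normal(0, 3, t.size)
flow_trace = 1.0 + 0.02 * np.clip(map_trace - 65, 0, None) + rng.normal(0, 0.05, t.size)
cfx = cfx_series(flow_trace, map_trace, window=90)
opt = optimal_map(cfx, map_trace)
print(opt, round(auc_below_opt(map_trace, opt, dt_hours=10 / 3600), 1))
```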
Martian resource locations: Identification and optimization
NASA Astrophysics Data System (ADS)
Chamitoff, Gregory; James, George; Barker, Donald; Dershowitz, Adam
2005-04-01
The identification and utilization of in situ Martian natural resources is the key to enable cost-effective long-duration missions and permanent human settlements on Mars. This paper presents a powerful software tool for analyzing Martian data from all sources, and for optimizing mission site selection based on resource collocation. This program, called Planetary Resource Optimization and Mapping Tool (PROMT), provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in situ resource utilization. Preliminary optimization results are shown for a number of mission scenarios.
Efficient search, mapping, and optimization of multi-protein genetic systems in diverse bacteria
Farasat, Iman; Kushwaha, Manish; Collens, Jason; Easterbrook, Michael; Guido, Matthew; Salis, Howard M
2014-01-01
Developing predictive models of multi-protein genetic systems to understand and optimize their behavior remains a combinatorial challenge, particularly when measurement throughput is limited. We developed a computational approach to build predictive models and identify optimal sequences and expression levels, while circumventing combinatorial explosion. Maximally informative genetic system variants were first designed by the RBS Library Calculator, an algorithm to design sequences for efficiently searching a multi-protein expression space across a > 10,000-fold range with tailored search parameters and well-predicted translation rates. We validated the algorithm's predictions by characterizing 646 genetic system variants, encoded in plasmids and genomes, expressed in six gram-positive and gram-negative bacterial hosts. We then combined the search algorithm with system-level kinetic modeling, requiring the construction and characterization of 73 variants to build a sequence-expression-activity map (SEAMAP) for a biosynthesis pathway. Using model predictions, we designed and characterized 47 additional pathway variants to navigate its activity space, find optimal expression regions with desired activity response curves, and relieve rate-limiting steps in metabolism. Creating sequence-expression-activity maps accelerates the optimization of many protein systems and allows previous measurements to quantitatively inform future designs. PMID:24952589
Multimodal Registration of White Matter Brain Data via Optimal Mass Transport.
Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L; Kikinis, Ron; Tannenbaum, Allen
2008-09-01
The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A . Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets.
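For reference, the L2 Monge formulation underlying the minimizing-flow scheme can be stated as below; this uses standard notation (assumes amsmath) and is not quoted from the paper.

```latex
% \mu_0, \mu_1 are the image intensities treated as densities on the domain
% \Omega, and u is the sought mass-preserving map from image A to image B.
\begin{align*}
  \min_{u}\;& \int_{\Omega} \lVert u(x) - x \rVert^{2}\, \mu_0(x)\, dx\\
  \text{subject to}\;& \mu_0(x) = \mu_1\bigl(u(x)\bigr)\,\bigl|\det \nabla u(x)\bigr|
  \qquad\text{(mass preservation)}.
\end{align*}
```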
Design Insights for MapReduce from Diverse Production Workloads
2012-01-25
different industries [5]. Consequently, there is a need to develop systematic knowledge of MapReduce behavior at both established users within technol... relevant to MapReduce-like systems that combine data movements and computation. 5.2 Task granularity: Many MapReduce workload management mechanisms make... executes the jobs given, versus what the jobs actually are. MapReduce workload managers currently optimize execution scheduling and placement
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
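The sketch below conveys the flavour of a genetic algorithm that uses map skewness as its target function, applied to a toy 1D Fourier-like "map" synthesised from candidate phases. It is not the SISA program: the synthesis model, GA operators and parameters are all invented for illustration.

```python
import numpy as np

def skewness(values):
    """Third standardized moment; positive skew is typical of interpretable maps."""
    v = values - values.mean()
    return (v ** 3).mean() / (v ** 2).mean() ** 1.5

def toy_map_from_phases(phases, amplitudes, freqs, grid):
    """Stand-in 'density map': a 1D Fourier-like synthesis from phases."""
    return sum(a * np.cos(f * grid + p) for a, f, p in zip(amplitudes, freqs, phases))

def ga_optimize_phases(amplitudes, freqs, grid, pop=60, gens=80, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.uniform(0, 2 * np.pi, size=(pop, len(amplitudes)))
    for _ in range(gens):
        fitness = np.array([skewness(toy_map_from_phases(ind, amplitudes, freqs, grid))
                            for ind in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]      # keep the best half
        # Uniform crossover between random parent pairs, then Gaussian mutation.
        idx = rng.integers(0, len(parents), size=(pop, 2))
        mask = rng.random((pop, len(amplitudes))) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        population = (children + rng.normal(0, sigma, children.shape)) % (2 * np.pi)
    return max(population, key=lambda ind: skewness(toy_map_from_phases(ind, amplitudes, freqs, grid)))

grid = np.linspace(0, 2 * np.pi, 256)
amps, freqs = np.ones(8), np.arange(1, 9)
best_phases = ga_optimize_phases(amps, freqs, grid)
print(round(skewness(toy_map_from_phases(best_phases, amps, freqs, grid)), 3))
```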
NASA Astrophysics Data System (ADS)
Pásztor, László; Laborczi, Annamária; Szatmári, Gábor; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Dobos, Endre
2014-05-01
Due to the former soil surveys and mapping activities significant amount of soil information has accumulated in Hungary. Present soil data requirements are mainly fulfilled with these available datasets either by their direct usage or after certain specific and generally fortuitous, thematic and/or spatial inference. Due to the more and more frequently emerging discrepancies between the available and the expected data, there might be notable imperfection as for the accuracy and reliability of the delivered products. With a recently started project (DOSoReMI.hu; Digital, Optimized, Soil Related Maps and Information in Hungary) we would like to significantly extend the potential, how countrywide soil information requirements could be satisfied in Hungary. We started to compile digital soil related maps which fulfil optimally the national and international demands from points of view of thematic, spatial and temporal accuracy. The spatial resolution of the targeted countrywide, digital, thematic maps is at least 1:50.000 (approx. 50-100 meter raster resolution). DOSoReMI.hu results are also planned to contribute to the European part of GSM.net products. In addition to the auxiliary, spatial data themes related to soil forming factors and/or to indicative environmental elements we heavily lean on the various national soil databases. The set of the applied digital soil mapping techniques is gradually broadened incorporating and eventually integrating geostatistical, data mining and GIS tools. In our paper we will present the first results. - Regression kriging (RK) has been used for the spatial inference of certain quantitative data, like particle size distribution components, rootable depth and organic matter content. In the course of RK-based mapping spatially segmented categorical information provided by the SMUs of Digital Kreybig Soil Information System (DKSIS) has been also used in the form of indicator variables. - Classification and regression trees (CART) were used to improve the spatial resolution of category-type soil maps (thematic downscaling), like genetic soil type and soil productivity maps. The approach was justified by the fact that certain thematic soil maps are not available in the required scale. Decision trees were applied for the understanding of the soil-landscape models involved in existing soil maps, and for the post-formalization of survey/compilation rules. The relationships identified and expressed in decision rules made the creation of spatially refined maps possible with the aid of high resolution environmental auxiliary variables. Among these co-variables, a special role was played by larger scale spatial soil information with diverse attributes. As a next step, the testing of random forests for the same purposes has been started. - Due to the simultaneous richness of available Hungarian legacy soil data, spatial inference methods and auxiliary environmental information, there is a high versatility of possible approaches for the compilation of a given soil (related) map. This suggests the opportunity of optimization. For the creation of an object specific soil (related) map with predefined parameters (resolution, accuracy, reliability etc.) one might intend to identify the optimum set of soil data, method and auxiliary co-variables optimized for the resources (data costs, computation requirements etc.). 
The first findings on the inclusion and joint usage of spatial soil data as well as on the consistency of various evaluations of the result maps will be also presented. Acknowledgement: Our work has been supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
NASA Astrophysics Data System (ADS)
Wang, H. B.; Li, J. W.; Zhou, B.; Yuan, Z. Q.; Chen, Y. P.
2013-03-01
In the last few decades, the development of Geographical Information Systems (GIS) technology has provided a method for the evaluation of landslide susceptibility and hazard. Slope units were found to be appropriate for the fundamental morphological elements in landslide susceptibility evaluation. Following the DEM construction in a loess area susceptible to landslides, the direct-reverse DEM technology was employed to generate 216 slope units in the studied area. After a detailed investigation, the landslide inventory was mapped in which 39 landslides, including paleo-landslides, old landslides and recent landslides, were present. Of the 216 slope units, 123 involved landslides. To analyze the mechanism of these landslides, six environmental factors were selected to evaluate landslide occurrence: slope angle, aspect, the height and shape of the slope, distance to river and human activities. These factors were extracted in terms of the slope unit within the ArcGIS software. The spatial analysis demonstrates that most of the landslides are located on convex slopes at an elevation of 100-150 m with slope angles from 135°-225° and 40°-60°. Landslide occurrence was then checked according to these environmental factors using an artificial neural network with back propagation, optimized by genetic algorithms. A dataset of 120 slope units was chosen for training the neural network model, i.e., 80 units with landslide presence and 40 units without landslide presence. The parameters of genetic algorithms and neural networks were then set: population size of 100, crossover probability of 0.65, mutation probability of 0.01, momentum factor of 0.60, learning rate of 0.7, max learning number of 10 000, and target error of 0.000001. After training on the datasets, the susceptibility of landslides was mapped for the land-use plan and hazard mitigation. Comparing the susceptibility map with landslide inventory, it was noted that the prediction accuracy of landslide occurrence is 93.02%, whereas units without landslide occurrence are predicted with an accuracy of 81.13%. To sum up, the verification shows satisfactory agreement with an accuracy of 86.46% between the susceptibility map and the landslide locations. In the landslide susceptibility assessment, ten new slopes were predicted to show potential for failure, which can be confirmed by the engineering geological conditions of these slopes. It was also observed that some disadvantages could be overcome in the application of the neural networks with back propagation, for example, the low convergence rate and local minimum, after the network was optimized using genetic algorithms. To conclude, neural networks with back propagation that are optimized by genetic algorithms are an effective method to predict landslide susceptibility with high accuracy.
Davidsson, Marcus; Diaz-Fernandez, Paula; Schwich, Oliver D.; Torroba, Marcos; Wang, Gang; Björklund, Tomas
2016-01-01
Detailed characterization and mapping of oligonucleotide function in vivo is generally a very time-consuming effort that only allows for hypothesis-driven subsampling of the full sequence to be analysed. Recent advances in deep sequencing, together with highly efficient parallel oligonucleotide synthesis and cloning techniques, have, however, opened up entirely new ways to map genetic function in vivo. Here we present a novel, optimized protocol for the generation of universally applicable, barcode-labelled plasmid libraries. The libraries are designed to enable the production of viral vector preparations assessing coding or non-coding RNA function in vivo. When generating high-diversity libraries, it is a challenge to achieve efficient cloning, unambiguous barcoding and detailed characterization using low-cost sequencing technologies. With the presented protocol, a diversity of more than 3 million uniquely barcoded adeno-associated viral (AAV) plasmids can be achieved in a single reaction, through a process achievable in any molecular biology laboratory. This approach opens up a multitude of in vivo assessments, from the evaluation of enhancer and promoter regions to the optimization of genome editing. The generated plasmid libraries are also useful for validation of sequencing clustering algorithms, and we here validate the newly presented message-passing clustering process named Starcode. PMID:27874090
Accelerating Commercial Remote Sensing
NASA Technical Reports Server (NTRS)
1995-01-01
Through the Visiting Investigator Program (VIP) at Stennis Space Center, Community Coffee was able to use satellites to forecast coffee crops in Guatemala. Using satellite imagery, the company can produce detailed maps that separate coffee cropland from wild vegetation and show information on the health of specific crops. The data can help control coffee prices and may eventually be used to optimize the application of fertilizers, pesticides and irrigation. This would result in maximal crop yields, minimal pollution and lower production costs. VIP is a NASA-funded mechanism designed to accelerate the growth of commercial remote sensing by promoting general awareness and basic training in the technology.
Design and scheduling for periodic concurrent error detection and recovery in processor arrays
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent
1992-01-01
Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.
NASA Technical Reports Server (NTRS)
Gopalan, Arun; Zubko, Viktor; Leptoukh, Gregory G.
2008-01-01
We look at issues, barriers and approaches for Data Fusion of satellite aerosol data as available from the GES DISC GIOVANNI Web Service. Daily Global Maps of AOT from a single satellite sensor alone contain gaps that arise due to various sources (sun glint regions, clouds, orbital swath gaps at low latitudes, bright underlying surfaces etc.). The goal is to develop a fast, accurate and efficient method to improve the spatial coverage of the Daily AOT data to facilitate comparisons with Global Models. Data Fusion may be supplemented by Optimal Interpolation (OI) as needed.
Be-safe travel, a web-based geographic application to explore safe-route in an area
NASA Astrophysics Data System (ADS)
Utamima, Amalia; Djunaidy, Arif
2017-08-01
In large cities in developing countries, various forms of criminality are often found. For instance, the most prominent crimes in Surabaya, Indonesia are the 3C crimes: theft with violence (curas), theft by weighting (curat), and motor vehicle theft (curanmor). 3C cases most often occur on highways and in residential areas. Therefore, new entrants to an area should be aware of these kinds of crime. Route planning systems such as Google Maps only consider the shortest distance in the calculation of the optimal route. The selection of the optimal path in this study considers not only the shortest distance but also another factor, namely the security level. This research addresses the need for an application that recommends the safest roads for vehicle passengers driving in an area. We propose Be-Safe Travel, a web-based application using the Google API that can be accessed by people who drive in an area but still lack knowledge of which routes are safe from crime. Be-Safe Travel is useful not only for new entrants, but also for couriers delivering valuable goods along the safest streets.
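A minimal sketch of the safest-route idea described above is shown below: each street segment gets a cost that mixes travel distance with a crime-risk score, and a standard shortest-path search over that cost returns the safer route. The tiny graph, the risk values and the weighting factor alpha are invented for illustration; the actual application works on Google Maps routes rather than a hand-built graph.

```python
# "Safest route" sketch: edge cost mixes normalised travel distance with a crime-risk
# score, so the chosen path may trade a longer distance for safer streets.
import networkx as nx

ALPHA = 0.5  # 0 -> pure shortest distance, 1 -> pure lowest crime risk

edges = [
    # (from, to, distance_km, crime_risk in [0, 1]) -- all values are made up
    ("hotel", "market", 1.0, 0.8),
    ("hotel", "park", 1.4, 0.2),
    ("market", "office", 1.0, 0.7),
    ("park", "office", 1.2, 0.1),
]

G = nx.Graph()
for u, v, dist, risk in edges:
    G.add_edge(u, v, distance=dist, risk=risk)

# Normalise distance by the longest edge so both cost terms are comparable
max_dist = max(d["distance"] for _, _, d in G.edges(data=True))
for _, _, d in G.edges(data=True):
    d["cost"] = (1 - ALPHA) * d["distance"] / max_dist + ALPHA * d["risk"]

print("shortest:", nx.shortest_path(G, "hotel", "office", weight="distance"))
print("safest  :", nx.shortest_path(G, "hotel", "office", weight="cost"))
```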
NASA Astrophysics Data System (ADS)
Pasqualotto, Nieves; Delegido, Jesús; Van Wittenberghe, Shari; Verrelst, Jochem; Rivera, Juan Pablo; Moreno, José
2018-05-01
Crop canopy water content (CWC) is an essential indicator of the crop's physiological state. While a diverse range of vegetation indices have earlier been developed for the remote estimation of CWC, most of them are defined for specific crop types and areas, making them less universally applicable. We propose two new water content indices applicable to a wide variety of crop types, allowing CWC maps to be derived at a large spatial scale. These indices were developed based on PROSAIL simulations and then optimized with an experimental dataset (SPARC03; Barrax, Spain). This dataset consists of water content and other biophysical variables for five common crop types (lucerne, corn, potato, sugar beet and onion) and corresponding top-of-canopy (TOC) reflectance spectra acquired by the hyperspectral HyMap airborne sensor. First, commonly used water content index formulations were analysed and validated for the variety of crops, overall resulting in an R2 lower than 0.6. In an attempt to move towards more generically applicable indices, the two new CWC indices exploit the principal water absorption features in the near-infrared by using multiple bands sensitive to water content. We propose the Water Absorption Area Index (WAAI) as the difference between the area under the null-water-content TOC reflectance (reference line) simulated with PROSAIL and the area under the measured TOC reflectance between 911 and 1271 nm. We also propose the Depth Water Index (DWI), a simplified four-band index based on the spectral depths produced by the water absorption at 970 and 1200 nm and two reference bands. Both the WAAI and DWI outperform established indices in predicting CWC when applied to heterogeneous croplands, with an R2 of 0.8 and 0.7, respectively, using an exponential fit. However, these indices did not perform well for species with a low fractional vegetation cover (<30%). HyMap CWC maps calculated with both indices are shown for the Barrax region. The results confirm the potential of using generically applicable indices for calculating CWC over a great variety of crops.
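The WAAI and DWI definitions quoted above can be computed numerically as sketched below: the WAAI is the area between a null-water-content reference reflectance and the measured spectrum over 911-1271 nm, and the DWI combines spectral depths at the 970 and 1200 nm absorption features with two reference bands. The spectra are synthetic and the specific reference bands used here for the DWI are an assumption, since the abstract does not list them.

```python
# Numerical sketch of the WAAI and a possible DWI formulation on synthetic spectra.
import numpy as np

wavelengths = np.arange(911, 1272, 1.0)                       # nm
reference = 0.45 * np.ones_like(wavelengths)                  # null-water-content reflectance
# Fake measured spectrum with absorption dips near 970 and 1200 nm
measured = (0.45 - 0.12 * np.exp(-((wavelengths - 970) / 40) ** 2)
                 - 0.18 * np.exp(-((wavelengths - 1200) / 60) ** 2))

# WAAI: area between the reference line and the measured reflectance over 911-1271 nm
waai = np.trapz(reference - measured, wavelengths)
print(f"WAAI = {waai:.1f} (reflectance x nm)")

# Four-band DWI from spectral depths at 970 and 1200 nm relative to two reference
# bands (920 and 1090 nm here; the band choice is an assumption).
def refl(nm):
    return np.interp(nm, wavelengths, measured)

dwi = (refl(920) - refl(970)) + (refl(1090) - refl(1200))
print(f"DWI  = {dwi:.3f}")
```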
Evaluating SPLASH-2 Applications Using MapReduce
NASA Astrophysics Data System (ADS)
Zhu, Shengkai; Xiao, Zhiwei; Chen, Haibo; Chen, Rong; Zhang, Weihua; Zang, Binyu
MapReduce has been prevalent for running data-parallel applications. By hiding non-functional concerns such as parallelism, fault tolerance and load balancing from programmers, MapReduce significantly simplifies the programming of large clusters. Owing to these features, researchers have also explored the use of MapReduce in other application domains, such as machine learning, textual retrieval and statistical translation, among others.
Ruan, Cheng-Jiang; Xu, Xue-Xuan; Shao, Hong-Bo; Jaleel, Cheruth Abdul
2010-09-01
In the past 20 years, the major effort in plant breeding has shifted from quantitative to molecular genetics, with emphasis on quantitative trait loci (QTL) identification and marker-assisted selection (MAS). However, results have been modest. This has been due to several factors, including the absence of tightly linked QTL, the non-availability of mapping populations, and the substantial time needed to develop such populations. To overcome these limitations, and as an alternative to planned populations, molecular marker-trait associations have been identified by combining germplasm with regression techniques. In the present review, the authors (1) survey the successful applications of germplasm-regression-combined (GRC) molecular marker-trait association identification in plants; (2) describe how to perform the GRC analysis and how it differs from mapping QTL based on a linkage map reconstructed from planned populations; (3) consider the factors that affect GRC association identification, including the selection of optimal germplasm and molecular markers and the testing of the identification efficiency of markers associated with traits; and (4) finally discuss the future prospects of GRC marker-trait association analysis in plant MAS/QTL breeding programs, especially in long-juvenile woody plants for which no other genetic information, such as linkage maps and QTL, is available.
Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce
Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng
2016-01-01
The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high-performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large-scale spatial applications. In this demonstration, we present Hadoop-GIS – a scalable and high-performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries and submitted through a command line/web interface for execution. In parallel to our system demonstration, we explain the system architecture and details of how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real-world use cases: large-scale pathology analytical imaging, and geo-spatial data warehousing. PMID:27617325
The psychological four-color mapping problem.
Francis, Gregory; Bias, Keri; Shive, Joshua
2010-06-01
Mathematicians have proven that four colors are sufficient to color 2-D maps so that no neighboring regions share the same color. Here we consider the psychological 4-color problem: identifying which 4 colors should be used to make a map easy to use. We build a model of visual search for this design task and demonstrate how to apply it to the task of identifying the optimal colors for a map. We parameterized the model with a set of 7 colors using a visual search experiment in which human participants found a target region on a small map. We then used the model to predict search times for new maps and identified the color assignments that minimize or maximize average search time. The differences between these maps were predicted to be substantial. The model was then tested with a larger set of 31 colors on a map of English counties under conditions in which participants might memorize some aspects of the map. Empirical tests of the model showed that an optimally colored version of this map is searched 15% faster than the correspondingly worst-colored map. Thus, the color assignment seems to affect search times in a way predicted by the model, and this effect persists even when participants might use other sources of knowledge about target location. PsycINFO Database Record (c) 2010 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Ramirez-Lopez, Leonardo; Alexandre Dematte, Jose
2010-05-01
There is consensus in the scientific community about the great need for spatial soil information. Conventional mapping methods are time consuming and involve high costs. Digital soil mapping has emerged as an area in which soil mapping is optimized by the application of mathematical and statistical approaches, as well as the application of expert knowledge in pedology. In this sense, the objective of the study was to develop a methodology for the spatial prediction of soil classes by simultaneously using soil spectroscopy methodologies linked to fieldwork, spectral data from satellite imagery and terrain attributes. The study area is located in São Paulo State and comprises 473 ha, covered by a regular grid (100 x 100 m). At each grid node, soil samples were collected at two depths (layers A and B). A total of 206 samples were extracted from transect sections and submitted to soil analysis (clay, Al2O3, Fe2O3, SiO2, TiO2, and weathering index). The first analog soil class map (ASC-N) contains only soil information from orders to subgroups of the USDA Soil Taxonomy system. The second map (ASC-H) contains additional information related to soil attributes such as color, ferric levels and base sum. For the elaboration of the digital soil maps, the data were divided into three groups: i) predicted soil attributes of layer B (related to soil weathering), obtained using a local soil spectral library; ii) spectral band data extracted from a Landsat image; and iii) terrain parameters. Each group was summarized by a principal component analysis (PCA). Digital soil maps were generated by supervised classification using a maximum likelihood method. The training information for this classification was extracted from five toposequences based on the analog soil class maps. The spectral models of soil weathering attributes showed a high predictive performance with low error (R2 0.71 to 0.90). The spatial prediction of these attributes also showed a high performance (validation R2 > 0.78). These models allowed the spatial resolution of soil weathering information to be increased. On the other hand, the comparison between the analog and digital soil maps showed a global accuracy of 69% for the ASC-N map and 62% for the ASC-H map, with kappa indices of 0.52 and 0.45, respectively.
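The workflow described above (summarising each covariate group by PCA and classifying soil classes with a maximum likelihood method) can be sketched as follows, with scikit-learn's QuadraticDiscriminantAnalysis standing in for the Gaussian maximum-likelihood classifier. All data and the number of components per group are synthetic placeholders, so the printed accuracy is meaningless.

```python
# Sketch: per-group PCA summarisation + Gaussian maximum-likelihood classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 600
groups = {
    "spectral_soil_attrs": rng.normal(size=(n, 6)),   # predicted weathering attributes
    "landsat_bands": rng.normal(size=(n, 6)),
    "terrain_params": rng.normal(size=(n, 5)),
}
soil_class = rng.integers(0, 4, size=n)               # 4 hypothetical soil classes

# Summarise each covariate group with its leading principal components
features = np.hstack([PCA(n_components=2).fit_transform(X) for X in groups.values()])

X_train, X_test, y_train, y_test = train_test_split(features, soil_class, random_state=0)
clf = QuadraticDiscriminantAnalysis().fit(X_train, y_train)   # maximum likelihood classifier
print("overall accuracy (random labels, placeholder only):", clf.score(X_test, y_test))
```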
Fan, Mingyi; Hu, Jiwei; Cao, Rensheng; Ruan, Wenqian; Wei, Xionghui
2018-06-01
Water pollution occurs mainly due to inorganic and organic pollutants, such as nutrients, heavy metals and persistent organic pollutants. For the modeling and optimization of pollutant removal, artificial intelligence (AI) has been used as a major tool in experimental design that can generate the optimal operational variables, since AI has recently advanced tremendously. The present review describes the fundamentals, advantages and limitations of AI tools. Artificial neural networks (ANNs) are the AI tools most frequently adopted to predict pollutant removal processes because of their capabilities of self-learning and self-adapting, while genetic algorithms (GA) and particle swarm optimization (PSO) are also useful AI methodologies for efficiently searching for global optima. This article summarizes the modeling and optimization of pollutant removal processes in water treatment using multilayer perceptron, fuzzy neural, radial basis function and self-organizing map networks. Furthermore, it is concluded that hybrid models of ANNs with GA and PSO can be successfully applied in water treatment with satisfactory accuracy. Finally, the limitations of current AI tools and their new developments are also highlighted for prospective applications in environmental protection. Copyright © 2018 Elsevier Ltd. All rights reserved.
AN EXPERIMENTAL ASSESSMENT OF MINIMUM MAPPING UNIT SIZE
Land-cover (LC) maps derived from remotely sensed data are often presented using a minimum mapping unit (MMU). The choice of a MMU that is appropriate for the projected use of a classification is important. The objective of this experiment was to determine the optimal MMU of a L...
Application of 3D Laser Scanning Technology in Complex Rock Foundation Design
NASA Astrophysics Data System (ADS)
Junjie, Ma; Dan, Lu; Zhilong, Liu
2017-12-01
Taking the complex landform of the Tanxi Mountain Landscape Bridge as an example, the application of 3D laser scanning technology in the mapping of complex rock foundations is studied in this paper. A set of 3D laser scanning techniques is established and several key engineering problems are solved. The first is 3D laser scanning of complex landforms. 3D laser scanning technology is used to obtain a complete 3D point cloud data model of the complex landform. The detailed and accurate surveying and mapping results reduce the measuring time and the number of supplementary measurements. The second is 3D collaborative modeling of the complex landform. A 3D model of the complex landform is established based on the 3D point cloud data model. The super-structural foundation model is introduced for 3D collaborative design. The optimal design plan is selected and the construction progress is accelerated. The last is finite-element analysis of the complex landform foundation. The 3D model of the complex landform is imported into ANSYS to build a finite element model for calculating the anti-slide stability of the rock, providing a basis for the foundation design and construction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, D; Nguyen, D; Voronenko, Y
Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; Part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
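A minimal sketch of the group-sparsity formulation solved with FISTA is given below: each candidate beam contributes a group of beamlets, and the block soft-thresholding proximal step drives entire groups to zero so that only a few beams remain active. The dose matrix, prescription and penalty weight are synthetic, and the nonnegativity and dose-volume constraints of the actual planning problem are omitted.

```python
# FISTA for a group-lasso-style fluence map problem:
#   minimise 0.5*||A x - d||^2 + lam * sum_g ||x_g||_2,
# where each group g holds the beamlets of one candidate beam.
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_beams, bixels = 200, 12, 10
A = rng.random((n_vox, n_beams * bixels))        # synthetic dose-influence matrix
d = rng.random(n_vox)                            # synthetic prescription
lam = 25.0                                       # controls how aggressively whole beams are pruned
groups = [slice(b * bixels, (b + 1) * bixels) for b in range(n_beams)]

L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the smooth gradient
x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0

def prox_group(v, thresh):
    """Block soft-thresholding: shrink each beam's bixel group toward zero."""
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= thresh else (1 - thresh / norm) * v[g]
    return out

for _ in range(300):
    grad = A.T @ (A @ z - d)
    x_new = prox_group(z - grad / L, lam / L)
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x_new + (t - 1) / t_new * (x_new - x)    # Nesterov momentum step
    x, t = x_new, t_new

active = [b for b, g in enumerate(groups) if np.linalg.norm(x[g]) > 1e-8]
print("selected beams:", active)
```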
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2015-09-01
Integration time and reference intensity are important factors for achieving a high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity in an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. Then, the system increases or decreases the reference intensity following the map data, using a given optimization algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, made it possible to change the integration time without manual calibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid the optimization of SNR and sensitivity for optical coherence tomography systems.
Growing a hypercubical output space in a self-organizing feature map.
Bauer, H U; Villmann, T
1997-01-01
Neural maps project data from an input space onto a neuron position in an (often lower-dimensional) output space grid in a neighborhood-preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by an adaptation of the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
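The base self-organization process that the GSOM extends can be sketched as the plain Kohonen SOFM update below; the GSOM's distinguishing step, growing the hypercubical output grid during learning, is not reproduced here, so the grid shape is fixed and chosen arbitrarily.

```python
# Minimal Kohonen SOFM update loop on a fixed 2-D output grid (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
data = rng.random((1000, 3))                       # input space: points in the unit cube
grid_shape = (8, 8)                                # fixed hypercubical (2-D) output grid
weights = rng.random(grid_shape + (3,))
grid_pos = np.stack(np.meshgrid(*[np.arange(s) for s in grid_shape], indexing="ij"), axis=-1)

n_steps = 5000
for step in range(n_steps):
    frac = step / n_steps
    lr = 0.5 * (0.01 / 0.5) ** frac                # exponentially decaying learning rate
    sigma = 3.0 * (0.5 / 3.0) ** frac              # shrinking neighbourhood radius
    x = data[rng.integers(len(data))]
    # best-matching unit = grid node whose weight vector is closest to x
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid_shape)
    dist2 = ((grid_pos - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood function on the grid
    weights += lr * h * (x - weights)              # pull nearby units toward the input

print("weight range after training:", weights.min(), weights.max())
```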
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new method of processing technologies based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster resource computing nodes and improves the efficiency of data-parallel applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users and by analyzing the distributed storage architecture, thereby improving the application efficiency of remote sensing images through an actual Hadoop service system.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de
A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Legleiter, Carl J.; Kinzel, Paul J.; Overstreet, Brandon T.
2011-01-01
This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes.
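Optimal band ratio analysis as used above can be sketched as an exhaustive search over band pairs, regressing depth against the log-transformed band ratio and keeping the pair with the highest R2. The spectra and depths below are synthetic stand-ins for the HyMap and survey data.

```python
# OBRA sketch: find the band pair whose ln(ratio) best predicts depth by linear regression.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n_points, n_bands = 300, 30
depth = rng.uniform(0.1, 2.0, n_points)                     # metres
reflectance = rng.uniform(0.05, 0.4, (n_points, n_bands))
reflectance[:, 12] *= np.exp(-0.6 * depth)                  # make band 12 depth-sensitive

best = (None, -np.inf, None)
for b1, b2 in combinations(range(n_bands), 2):
    x = np.log(reflectance[:, b1] / reflectance[:, b2])
    slope, intercept = np.polyfit(x, depth, 1)
    pred = slope * x + intercept
    r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
    if r2 > best[1]:
        best = ((b1, b2), r2, (slope, intercept))

(b1, b2), r2, (slope, intercept) = best
print(f"optimal band pair: {b1}/{b2}, R^2 = {r2:.3f}")
print(f"depth estimate: d = {slope:.2f} * ln(R{b1}/R{b2}) + {intercept:.2f}")
```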
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W.S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3), 1-14. 4]; however, it seems unlikely that a biological nervous system would implement the computations for the Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher, and produces fixation statistics similar to human.
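The ELM fixation rule quoted above reduces to a convolution and an argmax, as in the sketch below: the current posterior over target location is convolved with the square of the retinotopic detectability (d') map and the next fixation is placed at the maximum. Both maps here are invented examples.

```python
# ELM fixation-selection sketch: argmax of posterior convolved with squared d' map.
import numpy as np
from scipy.signal import fftconvolve

grid = np.linspace(-10, 10, 101)                     # degrees of visual angle
xx, yy = np.meshgrid(grid, grid)

# Posterior over target location (toy: two likely regions)
posterior = (np.exp(-((xx - 4) ** 2 + (yy + 2) ** 2) / 8)
             + 0.7 * np.exp(-((xx + 5) ** 2 + (yy - 5) ** 2) / 8))
posterior /= posterior.sum()

# Retinotopic detectability map: d' falls off with eccentricity from the fovea
dprime = 3.0 * np.exp(-np.sqrt(xx ** 2 + yy ** 2) / 6.0)

elm_map = fftconvolve(posterior, dprime ** 2, mode="same")
iy, ix = np.unravel_index(np.argmax(elm_map), elm_map.shape)
print(f"next fixation at ({grid[ix]:.1f}, {grid[iy]:.1f}) deg")
```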
Learning and optimization with cascaded VLSI neural network building-block chips
NASA Technical Reports Server (NTRS)
Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.
1992-01-01
To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs which are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches, with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
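The flavour of an IFS-based mutation can be sketched as below: a chaos-game iteration over a small set of affine contractions (here the classic Sierpinski-triangle maps) generates a point on a fractal attractor, which is then used to perturb the individual instead of Gaussian or Cauchy noise. How the 2-D attractor point is mapped onto the problem dimensions and scaled is an illustrative choice, not the paper's scheme.

```python
# IFS-based mutation sketch: perturbations drawn from a fractal attractor instead of
# Gaussian/Cauchy noise.
import numpy as np

rng = np.random.default_rng(11)

# Three affine contractions whose attractor is the Sierpinski triangle
IFS_MAPS = [
    lambda p: 0.5 * p,
    lambda p: 0.5 * p + np.array([0.5, 0.0]),
    lambda p: 0.5 * p + np.array([0.25, 0.5]),
]

def ifs_point(n_iter=25):
    """Chaos-game iteration: returns one point of the fractal attractor in [0, 1]^2."""
    p = rng.random(2)
    for _ in range(n_iter):
        p = IFS_MAPS[rng.integers(len(IFS_MAPS))](p)
    return p

def ifs_mutation(individual, scale=0.1):
    """Perturb each pair of coordinates with a centred, scaled IFS attractor point."""
    child = individual.copy()
    for i in range(0, len(child) - 1, 2):
        # (0.5, 0.33) is roughly the attractor centroid, so perturbations are centred
        child[i:i + 2] += scale * (ifs_point() - np.array([0.5, 0.33]))
    return child

parent = rng.uniform(-5, 5, size=10)      # an EP individual for a 10-D problem
print(ifs_mutation(parent))
```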
Jarukanont, Daungruthai; Coimbra, João T S; Bauerhenne, Bernd; Fernandes, Pedro A; Patel, Shekhar; Ramos, Maria J; Garcia, Martin E
2014-10-21
We report on the viability of breaking selected bonds in biological systems using tailored electromagnetic radiation. We first demonstrate, by performing large-scale simulations, that pulsed electric fields cannot produce selective bond breaking. Then, we present a theoretical framework for describing selective energy concentration on particular bonds of biomolecules upon application of tailored electromagnetic radiation. The theory is based on the mapping of biomolecules to a set of coupled harmonic oscillators and on optimal control schemes to describe optimization of temporal shape, the phase and polarization of the external radiation. We have applied this theory to demonstrate the possibility of selective bond breaking in the active site of bacterial DNA topoisomerase. For this purpose, we have focused on a model that was built based on a case study. Results are given as a proof of concept.
ClusCo: clustering and comparison of protein models.
Jamroz, Michal; Kolinski, Andrzej
2013-02-22
The development, optimization and validation of protein modeling methods require efficient tools for structural comparison. Frequently, a large number of models need to be compared with the target native structure. The main reason for the development of the Clusco software was to create a high-throughput tool for all-versus-all comparison, because calculating the similarity matrix is one of the bottlenecks in the protein modeling pipeline. Clusco is fast and easy-to-use software for high-throughput comparison of protein models with different similarity measures (cRMSD, dRMSD, GDT_TS, TM-Score, MaxSub, Contact Map Overlap) and for clustering of the comparison results with standard methods: K-means clustering or hierarchical agglomerative clustering. The application was highly optimized and written in C/C++, including code for parallel execution on CPU and GPU, which resulted in a significant speedup over similar clustering and scoring programs.
Rotariu, Mariana; Filep, R; Turnea, M; Ilea, M; Arotăriţei, D; Popescu, Marilena
2015-01-01
The prosthetic application is a highly complex process. Modeling and simulation of biomechanical processes in orthopedics is certainly a field of interest in current medical research. Optimization of the socket in order to improve the quality of the patient's life is a major objective in prosthetic rehabilitation. A variety of numerical methods for prosthetic applications have been developed and studied. An objective method is proposed to evaluate the performance of a prosthetic patient according to the surface pressure map over the residual limb. The friction coefficient due to the various liners used in transtibial and transfemoral prostheses is also taken into account. The creation of a bio-based model and mathematical simulation allows the design, construction and optimization of the contact between the prosthetic socket and the residual limb of the amputated patient, compensating for the patient's lack of functionality, using data collected and processed in real time and non-invasively. The von Mises stress distribution in the muscle flap tissue at the bone ends shows a larger region subjected to elevated von Mises stresses in the muscle tissue underlying longer truncated bones. The finite element method was used to conduct a stress analysis and show the force distribution along the device. The results contribute to a better understanding of the design of an optimized prosthesis that increases the patient's performance, along with a good choice of liner made of an appropriate material that better fits a particular residual limb. The study of prosthetic applications is an exciting and important topic in research and will profit considerably from theoretical input. Interpreting these results calls for ongoing collaboration between mathematics and medical orthopedics.
The Insight ToolKit image registration framework
Avants, Brian B.; Tustison, Nicholas J.; Stauffer, Michael; Song, Gang; Wu, Baohua; Gee, James C.
2014-01-01
Publicly available scientific resources help establish evaluation standards, provide a platform for teaching and improve reproducibility. Version 4 of the Insight ToolKit (ITK4) seeks to establish new standards in publicly available image registration methodology. ITK4 makes several advances in comparison to previous versions of ITK. ITK4 supports both multivariate images and objective functions; it also unifies high-dimensional (deformation field) and low-dimensional (affine) transformations with metrics that are reusable across transform types and with composite transforms that allow arbitrary series of geometric mappings to be chained together seamlessly. Metrics and optimizers take advantage of multi-core resources, when available. Furthermore, ITK4 reduces the parameter optimization burden via principled heuristics that automatically set scaling across disparate parameter types (rotations vs. translations). A related approach also constrains steps sizes for gradient-based optimizers. The result is that tuning for different metrics and/or image pairs is rarely necessary allowing the researcher to more easily focus on design/comparison of registration strategies. In total, the ITK4 contribution is intended as a structure to support reproducible research practices, will provide a more extensive foundation against which to evaluate new work in image registration and also enable application level programmers a broad suite of tools on which to build. Finally, we contextualize this work with a reference registration evaluation study with application to pediatric brain labeling.1 PMID:24817849
Chan, Emory M; Xu, Chenxu; Mao, Alvin W; Han, Gang; Owen, Jonathan S; Cohen, Bruce E; Milliron, Delia J
2010-05-12
While colloidal nanocrystals hold tremendous potential for both enhancing fundamental understanding of materials scaling and enabling advanced technologies, progress in both realms can be inhibited by the limited reproducibility of traditional synthetic methods and by the difficulty of optimizing syntheses over a large number of synthetic parameters. Here, we describe an automated platform for the reproducible synthesis of colloidal nanocrystals and for the high-throughput optimization of physical properties relevant to emerging applications of nanomaterials. This robotic platform enables precise control over reaction conditions while performing workflows analogous to those of traditional flask syntheses. We demonstrate control over the size, size distribution, kinetics, and concentration of reactions by synthesizing CdSe nanocrystals with 0.2% coefficient of variation in the mean diameters across an array of batch reactors and over multiple runs. Leveraging this precise control along with high-throughput optical and diffraction characterization, we effectively map multidimensional parameter space to tune the size and polydispersity of CdSe nanocrystals, to maximize the photoluminescence efficiency of CdTe nanocrystals, and to control the crystal phase and maximize the upconverted luminescence of lanthanide-doped NaYF(4) nanocrystals. On the basis of these demonstrative examples, we conclude that this automated synthesis approach will be of great utility for the development of diverse colloidal nanomaterials for electronic assemblies, luminescent biological labels, electroluminescent devices, and other emerging applications.
A fast optimization approach for treatment planning of volumetric modulated arc therapy.
Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong
2018-05-30
Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. Initial, equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total number of segments and monitor units is reduced for VMAT plans. For the three clinical cases, VMAT optimization took less than 5 min using the proposed approach, 3-4 times less than that of the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
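The progressive sampling strategy described above can be mirrored by the control-flow sketch below: start with a few equally spaced beams, optimise their fluence, then double the angular sampling and re-optimise with the previous fluence maps as a warm start, until the target beam count is reached. The dose model is a random matrix and the fluence optimiser a plain projected-gradient least squares, so only the loop structure corresponds to the paper.

```python
# Progressive beam-sampling sketch: coarse-to-fine beam sets, warm-started fluence optimisation.
import numpy as np

rng = np.random.default_rng(17)
n_vox, bixels, total_beams = 150, 8, 32
prescription = rng.random(n_vox)
dose = [rng.random((n_vox, bixels)) for _ in range(total_beams)]   # one matrix per beam angle

def optimise_fluence(beams, warm_start, iters=200):
    """Projected-gradient least squares for the stacked fluence of the given beams."""
    A = np.hstack([dose[b] for b in beams])
    x = np.concatenate([warm_start.get(b, np.zeros(bixels)) for b in beams])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        x = np.maximum(0.0, x - step * A.T @ (A @ x - prescription))
    return {b: x[i * bixels:(i + 1) * bixels] for i, b in enumerate(beams)}

fluence, n_beams = {}, 4
while n_beams <= total_beams:
    beams = range(0, total_beams, total_beams // n_beams)   # equally spaced beams, finer each round
    fluence = optimise_fluence(beams, warm_start=fluence)   # earlier fluence maps seed the refinement
    n_beams *= 2                                            # double the sampling resolution

print("beams in final plan:", len(fluence))
```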
Creating a water depth map from Earth Observation-derived flood extent and topography data
NASA Astrophysics Data System (ADS)
Matgen, Patrick; Giustarini, Laura; Chini, Marco; Hostache, Renaud; Pelich, Ramona; Schlaffer, Stefan
2017-04-01
Enhanced methods for monitoring temporal and spatial variations of water depth in rivers and floodplains are very important in operational water management. Currently, variations of water elevation can be estimated indirectly at the land-water interface using sequences of satellite EO imagery in combination with topographic data. In recent years high-resolution digital elevation models (DEM) and satellite EO data have become more readily available at the global scale. This study introduces an approach for efficiently converting remote sensing-derived flood extent maps into water depth maps using a floodplain's topography information. For this we make the assumption of uniform flow, that is, the depth of flow with respect to the drainage network is considered to be the same at every section of the floodplain. In other words, the depth of water above the nearest drainage is expected to be constant for a given river reach. To determine this value we first need the Height Above Nearest Drainage (HAND) raster, obtained by using the area of interest's DEM as source topography and a shapefile of the river network. The HAND model normalizes the topography with respect to the drainage network. Next, the HAND raster is thresholded in order to generate a binary mask that optimally fits, over the entire region of study, the flood extent map obtained from SAR or any other remote sensing product, including aerial photographs. The optimal threshold value corresponds to the height of the water line above the nearest drainage, termed HANDWATER, and is considered constant for a given subreach. Once the HANDWATER has been optimized, a water depth map can be generated by subtracting the value of the HAND raster at each location from this parameter value. These developments enable large-scale and near real-time applications and only require readily available EO data, a DEM and the river network as input data. The method relies on a hierarchical split-based approach that subdivides a drainage network into segments of variable length with evidence of uniform flow. The method has been tested with remote sensing data and DEM data that differ in terms of spatial resolution and accuracy. A comprehensive evaluation of the obtained water depth maps against hydrodynamic modelling results and in situ measured water level recordings was carried out on a reach of the river Severn located in the United Kingdom. First results show that the root mean squared difference is 10 cm when using high-resolution, high-precision data sets (i.e. aerial photographs of flood extent and a LiDAR-derived DEM) and amounts to 50 cm when using moderate-resolution SAR imagery from ENVISAT and an SRTM-derived DEM as inputs.
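The extent-to-depth conversion can be sketched with plain array operations, as below: search for the HAND threshold (HANDWATER) whose below-threshold area best matches the observed flood mask, then set the water depth to HANDWATER minus HAND inside the flooded cells. The rasters are synthetic and the fit score (a critical-success-index-style overlap) is an illustrative choice.

```python
# HAND-based water depth sketch: fit HANDWATER to an observed flood mask, then
# depth = HANDWATER - HAND within the flooded area (uniform water level assumption).
import numpy as np

rng = np.random.default_rng(23)
hand = np.abs(rng.normal(2.0, 1.5, size=(200, 200)))          # height above nearest drainage (m)
observed_flood = hand + rng.normal(0, 0.2, hand.shape) < 1.5  # noisy EO-derived flood extent

def fit_quality(threshold):
    """Overlap between the thresholded HAND mask and the observed flood map."""
    predicted = hand < threshold
    hits = np.logical_and(predicted, observed_flood).sum()
    return hits / np.logical_or(predicted, observed_flood).sum()

candidates = np.arange(0.1, 5.0, 0.05)
handwater = candidates[np.argmax([fit_quality(t) for t in candidates])]

depth = np.where(hand < handwater, handwater - hand, 0.0)
print(f"HANDWATER = {handwater:.2f} m, max depth = {depth.max():.2f} m")
```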
Criteria for the optimal selection of remote sensing optical images to map event landslides
NASA Astrophysics Data System (ADS)
Fiorucci, Federica; Giordan, Daniele; Santangelo, Michele; Dutto, Furio; Rossi, Mauro; Guzzetti, Fausto
2018-01-01
Landslides leave discernible signs on the land surface, most of which can be captured in remote sensing images. Trained geomorphologists analyse remote sensing images and map landslides through heuristic interpretation of photographic and morphological characteristics. Despite a wide use of remote sensing images for landslide mapping, no attempt to evaluate how the image characteristics influence landslide identification and mapping exists. This paper presents an experiment to determine the effects of optical image characteristics, such as spatial resolution, spectral content and image type (monoscopic or stereoscopic), on landslide mapping. We considered eight maps of the same landslide in central Italy: (i) six maps obtained through expert heuristic visual interpretation of remote sensing images, (ii) one map through a reconnaissance field survey, and (iii) one map obtained through a real-time kinematic (RTK) differential global positioning system (dGPS) survey, which served as a benchmark. The eight maps were compared pairwise and to a benchmark. The mismatch between each map pair was quantified by the error index, E. Results show that the map closest to the benchmark delineation of the landslide was obtained using the higher resolution image, where the landslide signature was primarily photographical (in the landslide source and transport area). Conversely, where the landslide signature was mainly morphological (in the landslide deposit) the best mapping result was obtained using the stereoscopic images. Albeit conducted on a single landslide, the experiment results are general, and provide useful information to decide on the optimal imagery for the production of event, seasonal and multi-temporal landslide inventory maps.
Anodized aluminum pressure sensitive paint for unsteady aerodynamic applications
NASA Astrophysics Data System (ADS)
Sakaue, Hirotaka
2003-06-01
A comprehensive study of anodized aluminum pressure sensitive paint (AA-PSP) is documented. The study consisted of the development of AA-PSP and its application to unsteady aerodynamic fields at atmospheric conditions. The luminophore application mechanism and two-component application on anodized aluminum were studied during development. Two-component application includes hydrophobic-coated AA-PSP and a bi-luminophore system. It was found that the polarity of the solvents and the surface charge of the anodized aluminum determine the optimal luminophore application. As a result, a wide variety of luminophores can be applied on anodized aluminum. To apply both components on anodized aluminum, the optimum solvent polarities for each component should match. AA-PSP performance characteristics, such as pressure sensitivity, temperature dependency, signal level, and aging, were improved by the luminophore application mechanism and two-component application. AA-PSPs demonstrate the capability of measuring surface pressures in unsteady aerodynamic fields. In an application to the Purdue Mach 4 Quiet Flow Ludwieg Tube, surface pressures on the order of a hundred pascals were measured for approximately 200 ms. The measurement uncertainty of the pressure was on the order of 5%. The main uncertainty source comes from fitting the adsorption control model to calibration points. The results compared qualitatively well to CFD calculations. A miniature fluidic oscillator was used to demonstrate the capability of measuring oscillating unsteady aerodynamic fields with a 6.4 kHz primary frequency. Flow oscillation images as well as pressure maps of various phases were captured by AA-PSP with PtTFPP as the luminophore (AA-PSP-PtTFPP). The main uncertainty sources come from fitting the adsorption control model to calibration points and from the pulse width of the illumination. The measurement uncertainty of the pressure was 4.68%. AA-PSP-PtTFPP was also applied to a highly amplified acoustic field in a standing wave tube. The maximum pressure change created was 171 dB (1.04 psi). Sinusoidal pressure wave images inside the standing wave tube were captured at various phases. From these images, the integrated pressure map was obtained. In this case, the measurement uncertainty was 3.64% and was due mainly to the pulse width and to fitting of the adsorption control model. Comparison with a theoretical model is necessary to validate the integrated map as a streaming pattern.
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
SSE-GIS v1.03 Web Mapping Application Now Available
Atmospheric Science Data Center
2018-03-16
SSE-GIS v1.03 Web Mapping Application Now Available. Wednesday, July 6, 2016 ... If you haven’t already noticed the link to the new SSE-GIS web application on the SSE homepage, entitled “GIS Web Mapping Applications and Services”, we invite you to visit the site. ...
The topographic grain concept in DEM-based geomorphometric mapping
NASA Astrophysics Data System (ADS)
Józsa, Edina
2016-04-01
A common drawback of geomorphological analyses based on digital elevation datasets is the definition of the search window size for the derivation of morphometric variables. The fixed-size neighbourhood determines the scale of the analysis and mapping, which can lead to the generalization of smaller surface details or the elimination of larger landform elements. The methods of DEM-based geomorphometric mapping are constantly developing in the direction of multi-scale landform delineation, but the optimal threshold for the search window size is still a limiting factor. A possible way to determine a suitable value for the parameter is to consider the topographic grain principle (Wood, W. F. - Snell, J. B. 1960, Pike, R. J. et al. 1989). The calculation is implemented as a bash shell script for GRASS GIS to determine the optimal threshold for the r.geomorphon module. The approach relies on the potential of the topographic grain to detect the characteristic local ridgeline-to-channel spacing. By calculating the relative relief values with nested neighbourhood matrices it is possible to define a break-point where the rate of increase of the local relief encountered by the sample drops significantly. The geomorphons approach (Jasiewicz, J. - Stepinski, T. F. 2013) is a cell-based DEM classification method for the identification of landform elements at a broad range of scales using a line-of-sight technique. Landforms larger than the maximum lookup distance are broken down into smaller elements; therefore, the threshold needs to be set to a relatively large value. On the other hand, the computational requirements and the size of the study sites determine the upper limit for the value. Therefore, the aim was to create a tool that helps determine the optimal parameter for the r.geomorphon module. As a result, it is possible to produce more objective and consistent maps while achieving the full efficiency of this mapping technique. For a thorough analysis of the applicability of the proposed methodology, a test site covering hilly and low mountainous regions in Southern Transdanubia, Hungary, was chosen. As the elevation dataset, the freely available SRTM DSM with 1 arc-second resolution was used, after the necessary error corrections had been implemented. Based on the delineated landform elements and morphometric variables, the physiographic characteristics of the landscape could be analysed and compared with the existing expert-based map of microregions. References: Wood, W. F. and J. B. Snell (1960). A quantitative system for classifying landforms. - Technical Report EP-124. U.S. Army Quartermaster Research and Engineering Center, Natick, 20 pp. Pike, R. J., et al. (1989). Topographic grain automated from digital elevation models. - Proceedings, Auto-Carto 9, ASPRS/ACSM Baltimore MD, 2-7 April 1989. Jasiewicz, J. and T. F. Stepinski (2013). Geomorphons - a pattern recognition approach to classification and mapping of landforms. - Geomorphology 182(0): 147-156.
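The topographic grain calculation can be approximated as in the sketch below: relative relief (maximum minus minimum elevation) is computed in nested windows of increasing size, and the break-point is taken where the marginal relief gain drops most sharply. The DEM is synthetic and the break-point rule is an illustrative simplification of the original procedure.

```python
# Topographic grain sketch: relief vs. window size, break-point where relief gain falls off.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

rng = np.random.default_rng(9)
x = np.linspace(0, 8 * np.pi, 400)
dem = 50 * np.sin(x)[None, :] + 50 * np.sin(x)[:, None] + rng.normal(0, 2, (400, 400))

window_sizes = np.arange(3, 61, 2)                      # nested neighbourhood sizes (cells)
relief = np.array([
    (maximum_filter(dem, size=w) - minimum_filter(dem, size=w)).mean()
    for w in window_sizes
])

gain = np.diff(relief)                                  # marginal relief gain per window step
breakpoint_idx = np.argmax(-np.diff(gain)) + 1          # where the gain rate falls off most
print("topographic grain ~", window_sizes[breakpoint_idx], "cells")
```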
ERIC Educational Resources Information Center
Wang, Kening; Mulvenon, Sean W.; Stegman, Charles; Anderson, Travis
2008-01-01
Google Maps API (Application Programming Interface), released in late June 2005 by Google, is an amazing technology that allows users to embed Google Maps in their own Web pages with JavaScript. Google Maps API has accelerated the development of new Google Maps based applications. This article reports a Web-based interactive mapping system…
The Arctic Observing Viewer: A Web-mapping Application for U.S. Arctic Observing Activities
NASA Astrophysics Data System (ADS)
Cody, R. P.; Manley, W. F.; Gaylord, A. G.; Kassin, A.; Villarreal, S.; Barba, M.; Dover, M.; Escarzaga, S. M.; Habermann, T.; Kozimor, J.; Score, R.; Tweedie, C. E.
2015-12-01
Although a great deal of progress has been made with various arctic observing efforts, it can be difficult to assess such progress when so many agencies, organizations, research groups and others are making such rapid progress over such a large expanse of the Arctic. To help meet the strategic needs of the U.S. SEARCH-AON program and facilitate the development of SAON and other related initiatives, the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org) has been developed. This web mapping application compiles detailed information pertaining to U.S. Arctic Observing efforts. Contributing partners include the U.S. NSF, USGS, ACADIS, ADIwg, AOOS, a2dc, AON, ARMAP, BAID, IASOA, INTERACT, and others. Over 7700 observation sites are currently in the AOV database and the application allows users to visualize, navigate, select, run advanced searches, draw, print, and more. During 2015, the web mapping application has been enhanced by the addition of a query builder that allows users to create rich and complex queries. AOV is founded on principles of software and data interoperability and includes an emerging "Project" metadata standard, which uses ISO 19115-1 and compatible web services. Substantial efforts have focused on maintaining and centralizing all database information. In order to keep up with emerging technologies, the AOV data set has been structured and centralized within a relational database and the application front-end has been ported to HTML5 to enable mobile access. Other application enhancements include an embedded Apache Solr search platform, which provides users with the capability to perform advanced searches, and a web-based administrative data management system that allows administrators to add, update, and delete information in real time. We encourage all collaborators to use AOV tools and services for their own purposes and to help us extend the impact of our efforts and ensure AOV complements other cyber-resources. Reinforcing dispersed but interoperable resources in this way will help to ensure improved capacities for conducting activities such as assessing the status of arctic observing efforts, optimizing logistic operations, and quickly accessing external and project-focused web resources for more detailed information and access to scientific data and derived products.
Liver retraction system by C3-muco-adhesive polymer films for laparoscopic surgery.
Aldeghaither, Saud; Tang, Benjie; Alijani, Afshin; McLean, Donald; Wright, Emma; Wang, Zhigang; Tait, Iain; Cuschieri, Alfred
2016-07-01
Conventional laparoscopic instruments used for retraction may cause trauma at the retraction site. Alternative retraction/lifting, especially of heavy solid organs such as the liver, may be obtained by other means. The present study was designed to explore the use of C3-muco-adhesive polymers (C3-MAPs), which exhibit strong binding to the liver shortly after application to the organ and which retain strong adhesion for sufficient time, to enable sustained retraction during laparoscopic operations. C3-muco-adhesive polymers were produced specifically for the study. In an ex vivo experimental set-up, discs of C3-MAPs were placed on the surface of porcine livers for adhesion and retraction studies involving objective measurements by tensiometry. Experiments were carried out on 14 porcine livers. The force required to detach the C3-MAPs from the liver exceeded 2.0 N 30 s after application. The adhesion force of the C3-MAP films was sufficient to enable the sustained retraction force necessary for exposure of the gall bladder, which was achieved by a mean retraction force of 4.85 N (SD = 0.63). This was sustained for a mean of 130 min (range 17.0-240.0). In the adhesion studies, the forces at 30 s required to detach the polymer discs from the liver exceeded 20 N (upper limit of the load cells of the Instron). The duration of the adhesion enabled sustained optimal gall bladder exposure for periods ranging from 17 to 240 min, with a mean of 130 ± 91 min. The results of the present study demonstrate that the adhesion and retraction properties of the engineered C3-MAP films are sufficient to enable complete exposure of the gall bladder for a period exceeding 1 h, confirming their potential for atraumatic retraction in laparoscopic and other minimal-access surgical approaches.
Surrogate based wind farm layout optimization using manifold mapping
NASA Astrophysics Data System (ADS)
Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester
2016-09-01
The high computational cost associated with high fidelity wake models such as RANS or LES is the primary bottleneck preventing direct high fidelity wind farm layout optimization (WFLO) using accurate CFD based wake models. Therefore, a surrogate based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate based methodology and the performance was compared with that of direct optimization using the high fidelity model. Significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optima as direct high fidelity optimization. The similarity between the responses of the models, as well as the number of mapping points and their positions, strongly influences the computational efficiency of the proposed method. As a proof of concept, realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optima as the fine model with far fewer fine model simulations.
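For readers unfamiliar with the coarse/fine model pair mentioned above, a minimal sketch of the Jensen wake model is given below (Python; the thrust coefficient, decay constants, and the single-row power proxy are illustrative assumptions rather than the paper's setup):

```python
import numpy as np

def jensen_deficit(x, ct=0.8, k=0.05, r0=40.0):
    """Fractional velocity deficit a distance x (m) directly downstream of a
    turbine with rotor radius r0, thrust coefficient ct and wake decay k."""
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

def row_power(spacings, k):
    """Relative power of one row of turbines given the gaps between them.
    Each turbine sees only its nearest upstream neighbour's wake; evaluating
    this with two different k values gives a crude coarse/fine model pair."""
    x = np.concatenate(([np.inf], np.asarray(spacings, float)))  # first turbine unwaked
    u = 1.0 - jensen_deficit(x, k=k)
    return float(np.sum(u ** 3))                                  # power scales with u^3
```

Evaluating row_power with, for example, a small k as the "fine" model and a larger k as the "coarse" model mimics the two Jensen variants used in the study.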
Local search for optimal global map generation using mid-decadal landsat images
Khatib, L.; Gasch, J.; Morris, Robert; Covington, S.
2007-01-01
NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map comprises thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable for images of adjacent scenes to be close together in time of acquisition, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates the Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
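The flavour of such a local search can be pictured with a small sketch (Python; `candidates`, `cost`, and the greedy acceptance rule are hypothetical placeholders for the paper's scene/image data and its quality-plus-seasonality objective):

```python
import random

def local_search(scenes, candidates, cost, iters=10000):
    """Greedy local search over one acquisition choice per scene. `candidates[s]`
    lists the available images for scene s; `cost` scores a whole assignment,
    e.g. cloud cover plus seasonal mismatch between adjacent scenes."""
    assignment = {s: random.choice(candidates[s]) for s in scenes}
    best = cost(assignment)
    for _ in range(iters):
        s = random.choice(scenes)
        old = assignment[s]
        assignment[s] = random.choice(candidates[s])
        new = cost(assignment)
        if new <= best:
            best = new                  # keep improving (or equal) moves
        else:
            assignment[s] = old         # undo worsening moves
    return assignment, best
```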
Choosing colors for map display icons using models of visual search.
Shive, Joshua; Francis, Gregory
2013-04-01
We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
NASA Astrophysics Data System (ADS)
Hu, Xuemin; Chen, Long; Tang, Bo; Cao, Dongpu; He, Haibo
2018-02-01
This paper presents a real-time dynamic path planning method for autonomous driving that avoids both static and moving obstacles. The proposed path planning method determines not only an optimal path, but also the appropriate acceleration and speed for a vehicle. In this method, we first construct a center line from a set of predefined waypoints, which are usually obtained from a lane-level map. A series of path candidates are generated by the arc length and offset to the center line in the s - ρ coordinate system. Then, all of these candidates are converted into Cartesian coordinates. The optimal path is selected considering the total cost of static safety, comfortability, and dynamic safety; meanwhile, the appropriate acceleration and speed for the optimal path are also identified. Various types of roads, including single-lane roads and multi-lane roads with static and moving obstacles, are designed to test the proposed method. The simulation results demonstrate the effectiveness of the proposed method, and indicate its wide practical application to autonomous driving.
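A stripped-down version of the candidate-generation and selection step might look as follows (Python sketch; the offset geometry, the obstacle array, and the cost weights are simplified assumptions, whereas the published formulation also scores comfort and dynamic safety and returns speed profiles):

```python
import numpy as np

def candidate_paths(center, offsets):
    """Shift a polyline `center` (N x 2) sideways along its unit normals to get
    one candidate path per lateral offset rho (simplified s-rho construction)."""
    d = np.gradient(center, axis=0)
    normals = np.column_stack((-d[:, 1], d[:, 0]))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return [center + rho * normals for rho in offsets]

def select_path(paths, obstacles, w_safe=1.0, w_smooth=0.1):
    """Pick the candidate minimising a static-safety plus smoothness cost;
    `obstacles` is an M x 2 array of obstacle positions."""
    def cost(p):
        clearance = np.linalg.norm(p[:, None, :] - obstacles[None, :, :], axis=2).min()
        curvature = np.abs(np.diff(p, n=2, axis=0)).sum()
        return w_safe / (clearance + 1e-6) + w_smooth * curvature
    return min(paths, key=cost)
```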
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
Demonstration of decomposition and optimization in the design of experimental space systems
NASA Technical Reports Server (NTRS)
Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.
1989-01-01
Effective design strategies for a class of systems which may be termed Experimental Space Systems (ESS) are needed. These systems, which include large space antennas and observatories, space platforms, earth satellites and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoffs, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.
Han, Min; Fan, Jianchao; Wang, Jun
2011-09-01
A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.
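To illustrate how a chaotic map can replace uniform random coefficients inside particle swarm optimization, here is a minimal sketch (Python; the Gauss-map parameters and PSO constants are illustrative only, and this is not the GPSO+DFNN training loop described above):

```python
import numpy as np

def gauss_map(x, alpha=4.9, beta=-0.58):
    """Gaussian (Gauss/mouse) chaotic map; parameter values are illustrative."""
    return np.exp(-alpha * x ** 2) + beta

def chaotic_pso(f, dim, n=30, iters=200, bounds=(-5.0, 5.0)):
    """Minimal PSO in which the cognitive/social mix is driven by a chaotic
    sequence instead of fresh uniform numbers (a sketch, not the paper's GPSO)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    c = np.random.rand()
    for _ in range(iters):
        c = gauss_map(c)
        r = abs(c)                                   # chaotic mixing coefficient
        v = 0.7 * v + 1.5 * r * (pbest - x) + 1.5 * (1 - r) * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

As a sanity check, chaotic_pso(lambda z: np.sum(z ** 2), dim=5) should converge near the origin.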
NASA Astrophysics Data System (ADS)
Terrazzino, Alfonso; Volponi, Silvia; Borgogno Mondino, Enrico
2001-12-01
An investigation has been carried out, concerning remote sensing techniques, in order to assess their potential application to the energy system business: the most interesting results concern a new approach, based on digital data from remote sensing, to infrastructures with a large territorial distribution: in particular OverHead Transmission Lines (OHTLs), for the high voltage transmission and distribution of electricity over large distances. Remote sensing could in principle be applied to all the phases of the system lifetime, from planning to design, construction, management, monitoring and maintenance. In this article, a remote sensing based approach is presented, targeted to line planning: optimization of OHTL path and layout, according to different parameters (technical, environmental and industrial). Planning new OHTLs is of particular interest in emerging markets, where typically the cartography is missing or available only at low accuracy scales (1:50,000 and lower) and often not updated. Multi-spectral images can be used to generate thematic maps of the region of interest for the planning (soil coverage). Digital Elevation Models (DEMs) allow the planners to easily access the morphologic information of the surface. Other auxiliary information from local laws, environmental instances, and international (IEC) standards can be integrated in order to perform an accurate optimized path choice and preliminary spotting of the OHTLs. This operation is carried out by an ABB proprietary optimization algorithm: the output is a preliminary path that best fits the optimization parameters of the line in a life cycle approach.
Creating affordable Internet map server applications for regional scale applications.
Lembo, Arthur J; Wagenet, Linda P; Schusler, Tania; DeGloria, Stephen D
2007-12-01
This paper presents an overview and process for developing an Internet Map Server (IMS) application for a local volunteer watershed group using an Internal Internet Map Server (IIMS) strategy. The paper illustrates that modern GIS architectures utilizing an internal Internet map server coupled with a spatial SQL command language allow for rapid development of IMS applications. This approach means that powerful IMS applications can be rapidly and affordably developed for volunteer organizations that lack significant funds or a full-time information technology staff.
NASA Astrophysics Data System (ADS)
Reerink, Thomas J.; van de Berg, Willem Jan; van de Wal, Roderik S. W.
2016-11-01
This paper accompanies the second OBLIMAP open-source release. The package is developed to map climate fields between a general circulation model (GCM) and an ice sheet model (ISM) in both directions by using optimal aligned oblique projections, which minimize distortions. The curvature of the surfaces of the GCM and ISM grid differ, both grids may be irregularly spaced and the ratio of the grids is allowed to differ largely. OBLIMAP's stand-alone version is able to map data sets that differ in various aspects on the same ISM grid. Each grid may either coincide with the surface of a sphere, an ellipsoid or a flat plane, while the grid types might differ. Re-projection of, for example, ISM data sets is also facilitated. This is demonstrated by relevant applications concerning the major ice caps. As the stand-alone version also applies to the reverse mapping direction, it can be used as an offline coupler. Furthermore, OBLIMAP 2.0 is an embeddable GCM-ISM coupler, suited for high-frequency online coupled experiments. A new fast scan method is presented for structured grids as an alternative for the former time-consuming grid search strategy, realising a performance gain of several orders of magnitude and enabling the mapping of high-resolution data sets with a much larger number of grid nodes. Further, a highly flexible masked mapping option is added. The limitation of the fast scan method with respect to unstructured and adaptive grids is discussed together with a possible future parallel Message Passing Interface (MPI) implementation.
An atomic model of brome mosaic virus using direct electron detection and real-space optimization.
Wang, Zhao; Hryc, Corey F; Bammes, Benjamin; Afonine, Pavel V; Jakana, Joanita; Chen, Dong-Hua; Liu, Xiangan; Baker, Matthew L; Kao, Cheng; Ludtke, Steven J; Schmid, Michael F; Adams, Paul D; Chiu, Wah
2014-09-04
Advances in electron cryo-microscopy have enabled structure determination of macromolecules at near-atomic resolution. However, structure determination, even using de novo methods, remains susceptible to model bias and overfitting. Here we describe a complete workflow for data acquisition, image processing, all-atom modelling and validation of brome mosaic virus, an RNA virus. Data were collected with a direct electron detector in integrating mode and an exposure beyond the traditional radiation damage limit. The final density map has a resolution of 3.8 Å as assessed by two independent data sets and maps. We used the map to derive an all-atom model with a newly implemented real-space optimization protocol. The validity of the model was verified by its match with the density map and a previous model from X-ray crystallography, as well as the internal consistency of models from independent maps. This study demonstrates a practical approach to obtain a rigorously validated atomic resolution electron cryo-microscopy structure.
Duchateau, Nicolas; Kostantyn Butakov, Constantine Butakoff; Andreu, David; Fernández-Armenta, Juan; Bijnens, Bart; Berruezo, Antonio; Sitges, Marta; Camara, Oscar
2017-01-01
Electro-anatomical maps (EAMs) are commonly acquired in clinical routine for guiding ablation therapies. They provide voltage and activation time information on a 3-D anatomical mesh representation, making them useful for analyzing the electrical activation patterns in specific pathologies. However, the variability between the different acquisitions and anatomies hampers the comparison between different maps. This paper presents two contributions for the analysis of electrical patterns in EAM data from biventricular surfaces of cardiac chambers. The first contribution is an integrated automatic 2-D disk representation (2-D bull’s eye plot) of the left ventricle (LV) and right ventricle (RV) obtained with a quasi-conformal mapping from the 3-D EAM meshes, that allows an analysis of cardiac resynchronization therapy (CRT) lead positioning, interpretation of global (total activation time), and local indices (local activation time (LAT), surrogates of conduction velocity, inter-ventricular, and transmural delays) that characterize changes in the electrical activation pattern. The second contribution is a set of indices derived from the electrical activation: speed maps, computed from LAT values, to study the electrical wave propagation, and histograms of isochrones to analyze regional electrical heterogeneities in the ventricles. We have applied the proposed methods to look for the underlying physiological mechanisms of left bundle branch block (LBBB) and CRT, with the goal of optimizing the therapy by improving CRT response. To better illustrate the benefits of the proposed tools, we created a set of synthetically generated and fully controlled activation patterns, where the proposed representation and indices were validated. Then, the proposed analysis tools are used to analyze EAM data from an experimental swine model of induced LBBB with an implanted CRT device. We have analyzed and compared the electrical activation patterns at baseline, LBBB, and CRT stages in four animals: two without any structural disease and two with an induced infarction. By relating the CRT lead location with electrical dyssynchrony, we evaluated current hypotheses about lead placement in CRT and showed that optimal pacing sites should target the RV lead close to the apex and the LV one distant from it. PMID:29164019
Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick
2015-01-01
UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratio of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R2-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R2-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications. PMID:26437410
Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick
2015-09-30
UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratio of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications.
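The core of the OBRA step used in the two records above can be sketched in a few lines (Python; this assumes a matrix of reflectance spectra with matching depths and uses a plain ln band ratio with linear correlation, in the spirit of Legleiter et al. 2009, without the additional processing described in the abstracts):

```python
import numpy as np

def obra(reflectance, depth):
    """Optimal Band Ratio Analysis: try every band pair, regress depth on the
    ln band ratio, and keep the pair with the highest R^2 (simplified).
    `reflectance` is (n_samples, n_bands); `depth` is (n_samples,)."""
    n_bands = reflectance.shape[1]
    best_pair, best_r2 = None, -np.inf
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(reflectance[:, i] / reflectance[:, j])
            r2 = np.corrcoef(x, depth)[0, 1] ** 2
            if r2 > best_r2:
                best_pair, best_r2 = (i, j), r2
    return best_pair, best_r2
```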
Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A
2014-08-01
The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called a "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments that are actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet doses in order to combine them with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check the CARMEN system showed high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps derived from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
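The aperture-weighting step based on linear programming can be illustrated with a toy formulation (Python/SciPy; the dose matrix, target vector, and single organ-at-risk constraint are hypothetical simplifications, not the CARMEN objective):

```python
import numpy as np
from scipy.optimize import linprog

def weight_apertures(D, target, oar_limit, oar_mask):
    """Choose non-negative aperture weights w so that the summed dose D.T @ w
    reaches `target` in every voxel (slack variables absorb underdose) while
    staying below `oar_limit` in organ-at-risk voxels. Toy formulation only.
    D is (n_apertures, n_voxels); oar_mask is a boolean voxel mask."""
    n_ap, n_vox = D.shape
    c = np.concatenate([np.zeros(n_ap), np.ones(n_vox)])      # minimise total underdose
    A_cover = np.hstack([-D.T, -np.eye(n_vox)])                # D.T @ w + s >= target
    b_cover = -np.asarray(target, float)
    A_oar = np.hstack([D.T[oar_mask], np.zeros((oar_mask.sum(), n_vox))])
    b_oar = np.full(oar_mask.sum(), float(oar_limit))          # D.T @ w <= limit on OARs
    res = linprog(c, A_ub=np.vstack([A_cover, A_oar]),
                  b_ub=np.concatenate([b_cover, b_oar]),
                  bounds=[(0, None)] * (n_ap + n_vox))
    return res.x[:n_ap]
```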
2011-01-01
Background Technological advances are progressively increasing the application of genomics to a wider array of economically and ecologically important species. High-density maps enriched for transcribed genes facilitate the discovery of connections between genes and phenotypes. We report the construction of a high-density linkage map of expressed genes for the heterozygous genome of Eucalyptus using Single Feature Polymorphism (SFP) markers. Results SFP discovery and mapping was achieved using pseudo-testcross screening and selective mapping to simultaneously optimize linkage mapping and microarray costs. SFP genotyping was carried out by hybridizing complementary RNA prepared from the xylem of 4.5-year-old trees to an SFP array containing 103,000 25-mer oligonucleotide probes representing 20,726 unigenes derived from a modest size expressed sequence tags collection. An SFP-mapping microarray with 43,777 selected candidate SFP probes representing 15,698 genes was subsequently designed and used to genotype SFPs in a larger subset of the segregating population drawn by selective mapping. A total of 1,845 genes were mapped, with 884 of them ordered with high likelihood support on a framework map anchored to 180 microsatellites with an average density of 1.2 cM. Using more probes per unigene increased by two-fold the likelihood of detecting segregating SFPs, eventually resulting in more genes mapped. In silico validation showed that 87% of the SFPs map to the expected location on the 4.5X draft sequence of the Eucalyptus grandis genome. Conclusions The Eucalyptus 1,845 gene map is the most highly enriched map for transcriptional information for any forest tree species to date. It represents a major improvement on the number of genes previously positioned on Eucalyptus maps and provides an initial glimpse at the gene space for this global tree genome. A general protocol is proposed to build high-density transcript linkage maps in less characterized plant species by SFP genotyping with a concurrent objective of reducing microarray costs. High-density gene-rich maps represent a powerful resource to assist gene discovery endeavors when used in combination with QTL and association mapping and should be especially valuable to assist the assembly of reference genome sequences soon to come for several plant and animal species. PMID:21492453
Eytan, Danny; Pang, Elizabeth W; Doesburg, Sam M; Nenadovic, Vera; Gavrilovic, Bojan; Laussen, Peter; Guerguerian, Anne-Marie
2016-01-01
Acute brain injury is a common cause of death and critical illness in children and young adults. Fundamental management focuses on early characterization of the extent of injury and optimizing recovery by preventing secondary damage during the days following the primary injury. Currently, bedside technology for measuring neurological function is mainly limited to using electroencephalography (EEG) for detection of seizures and encephalopathic features, and evoked potentials. We present a proof of concept study in patients with acute brain injury in the intensive care setting, featuring a bedside functional imaging set-up designed to map cortical brain activation patterns by combining high density EEG recordings, multi-modal sensory stimulation (auditory, visual, and somatosensory), and EEG source modeling. Use of source-modeling allows for examination of spatiotemporal activation patterns at the cortical region level as opposed to the traditional scalp potential maps. The application of this system in both healthy and brain-injured participants is demonstrated with modality-specific source-reconstructed cortical activation patterns. By combining stimulation obtained with different modalities, most of the cortical surface can be monitored for changes in functional activation without having to physically transport the subject to an imaging suite. The results in patients in an intensive care setting with anatomically well-defined brain lesions suggest a topographic association between their injuries and activation patterns. Moreover, we report the reproducible application of a protocol examining a higher-level cortical processing with an auditory oddball paradigm involving presentation of the patient's own name. This study reports the first successful application of a bedside functional brain mapping tool in the intensive care setting. This application has the potential to provide clinicians with an additional dimension of information to manage critically-ill children and adults, and potentially patients not suited for magnetic resonance imaging technologies.
[Peroperative risks in cerebral aneurysm surgery].
Mustaki, J P; Bissonnette, B; Archer, D; Boulard, G; Ravussin, P
1996-01-01
The perioperative complications associated with cerebral aneurysm surgery require specific anaesthetic management. Four major perioperative accidents are discussed in this review. The anaesthetic and surgical management in case of rebleeding subsequent to the re-rupture of the aneurysm is mainly prophylactic. It includes haemodynamic stability assurance and maintenance of mean arterial pressure (MAP) between 80-90 mmHg during stimulation of the patient, such as endotracheal intubation, application of the skull-pin head-holder, incision, and craniotomy. The aneurysmal transmural pressure should be adequately maintained by avoiding an aggressive decrease of intracranial pressure. Once the skull is open, the brain must be kept slack in order to decrease pressure under the retractors and avoid the risks of stretching and tearing of the adjacent vessels. If, despite these precautions, the aneurysm ruptures again, MAP should be decreased to 60 mmHg and the brain rendered more slack, in order to allow direct clipping of the aneurysm, or temporary clipping of the adjacent vessels. The optimal agents in this situation are isoflurane (which decreases CMRO2), intravenous anaesthetic agents (in spite of their negative inotropic effect, they may potentially protect the brain) and sodium nitroprusside. Vasospasm usually occurs between the 3rd and the 7th day after subarachnoid haemorrhage. It may be seen peroperatively. The optimal treatment, as well as prophylaxis, is moderate controlled hypertension (MAP > 100 mmHg), associated with hypervolaemia and haemodilution, the so-called triple H therapy, with strict control of the filling pressures. Other beneficial therapies are calcium antagonists (nimodipine and nicardipine), the removal of the blood accumulated around the brain and in the cisternae, and possibly local administration of papaverine. Abrupt MAP increases are controlled in order to maintain adequate aneurysmal transmural pressure. Beta-blockers, local anaesthetics administered locally or intravenously, a carefully titrated level of anaesthesia and maintained volaemia play a protective role. Cerebral oedema is sometimes already present at the opening of the skull or may arise later, due to a high pressure under the retractors, to the surgical manipulations of the brain or to brain ischaemia subsequent to temporary clipping. Its treatment is aggressive, with intravenous agents, mannitol, deep hypocapnia and/or lumbar drainage. Prophylaxis, according to the "brain homeostasis concept", is the preferred method to avoid these four peroperative accidents. It includes normal blood volume, normoglycaemia, moderate hypocapnia, normotension, soft manipulation of the brain and optimal brain relaxation.
Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data
NASA Astrophysics Data System (ADS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; Yadav, J.
2017-02-01
We present the first application of a new foreground removal pipeline to the current leading H I intensity mapping data set, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h-field data of the GBT observations previously presented in Masui et al. and Switzer et al., covering about 41 deg2 at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (FASTICA), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that FASTICA is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which FASTICA, as a conservative subtraction technique of non-Gaussian signals, cannot mitigate. However, we determine similar GBT-WiggleZ cross-correlation measurements to those obtained by the singular value decomposition (SVD) method, and confirm that foreground subtraction with FASTICA is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and FASTICA are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping data sets.
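A bare-bones version of the FASTICA foreground subtraction idea might look like this (Python with scikit-learn; the number of components and the map layout are assumptions, and the real pipeline also handles beam, masking, and transfer-function corrections):

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_foregrounds(maps, n_components=4):
    """Model the frequency maps (n_freq x n_pix) as a few non-Gaussian foreground
    modes, reconstruct those modes, and return the residual 21 cm + noise maps."""
    ica = FastICA(n_components=n_components, max_iter=1000)
    sources = ica.fit_transform(maps.T)              # pixels as samples, freqs as features
    foreground = ica.inverse_transform(sources).T    # back to (n_freq, n_pix)
    return maps - foreground
```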
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
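A compact sketch of a chaotic firefly algorithm is shown below (Python; the logistic map stands in for just one of the twelve maps examined in the study, and all constants are illustrative):

```python
import numpy as np

def logistic(x, mu=4.0):
    """Logistic chaotic map; one of many possible choices."""
    return mu * x * (1.0 - x)

def chaotic_fa(f, dim, n=25, iters=100, gamma=1.0, alpha=0.2, bounds=(-5.0, 5.0)):
    """Firefly algorithm whose attractiveness coefficient follows a chaotic
    sequence (illustrative sketch of a 'chaotic FA', not the published code)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n, dim))
    light = np.apply_along_axis(f, 1, x)
    beta = np.random.rand()
    for _ in range(iters):
        beta = logistic(beta)                        # chaos-tuned attractiveness
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:              # firefly j is brighter (minimisation)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += beta * np.exp(-gamma * r2) * (x[j] - x[i]) \
                            + alpha * (np.random.rand(dim) - 0.5)
            x[i] = np.clip(x[i], lo, hi)
            light[i] = f(x[i])
    best = light.argmin()
    return x[best], float(light[best])
```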
An efficient approach to the travelling salesman problem using self-organizing maps.
Vieira, Frederico Carvalho; Dória Neto, Adrião Duarte; Costa, José Alfredo Ferreira
2003-04-01
This paper presents an approach to the well-known Travelling Salesman Problem (TSP) using Self-Organizing Maps (SOM). The SOM algorithm carries useful topological information about its neuron configuration in Cartesian space, which can be used to solve optimization problems. Aspects of initialization, parameter adaptation, and complexity analysis of the proposed SOM-based algorithm are discussed. The results show an average deviation of 3.7% from the optimal tour length for a set of 12 TSP instances.
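The elastic-ring flavour of SOM-based TSP solving can be sketched as follows (Python; the neuron count, learning-rate and radius schedules are ad hoc choices, not the parameters analysed in the paper):

```python
import numpy as np

def som_tsp(cities, n_neurons=None, iters=20000, lr=0.8, radius=None):
    """Elastic-ring SOM for the TSP: a ring of neurons is repeatedly pulled
    toward randomly sampled cities; the neuron order then defines the tour."""
    cities = np.asarray(cities, float)
    n_neurons = n_neurons or 8 * len(cities)
    radius = radius or n_neurons / 10.0
    ring = np.random.rand(n_neurons, 2) * np.ptp(cities, axis=0) + cities.min(axis=0)
    idx = np.arange(n_neurons)
    for _ in range(iters):
        city = cities[np.random.randint(len(cities))]
        win = np.linalg.norm(ring - city, axis=1).argmin()
        d = np.minimum(np.abs(idx - win), n_neurons - np.abs(idx - win))  # ring distance
        h = np.exp(-(d / max(radius, 1.0)) ** 2)[:, None]
        ring += lr * h * (city - ring)
        lr *= 0.99997
        radius *= 0.9997                              # shrink the neighbourhood
    order = [np.linalg.norm(ring - c, axis=1).argmin() for c in cities]
    return np.argsort(order)                          # visiting order of the cities
```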
Design of efficient circularly symmetric two-dimensional variable digital FIR filters.
Bindima, Thayyil; Elias, Elizabeth
2016-05-01
Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability.
NASA Astrophysics Data System (ADS)
Yin, Y.; Sonka, M.
2010-03-01
A novel method is presented for definition of search lines in a variety of surface segmentation approaches. The method is inspired by properties of electric field direction lines and is applicable to general-purpose n-D shape-based image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee that inter-object graph connecting lines are non-intersecting and inherently covering the entire object-interaction space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all vertex correspondent pairs between the regions of mutual interaction. We demonstrate the benefits of the electric field approach in several examples ranging from relatively simple single-surface segmentation to complex multiobject multi-surface segmentation of femur-tibia cartilage. The performance of our approach is demonstrated in 60 MR images from the Osteoarthritis Initiative (OAI), in which our approach achieved a very good performance as judged by surface positioning errors (average of 0.29 and 0.59 mm for signed and unsigned cartilage positioning errors, respectively).
Design of efficient circularly symmetric two-dimensional variable digital FIR filters
Bindima, Thayyil; Elias, Elizabeth
2016-01-01
Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently the design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D by transformation, is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using generalized McClellan transformation resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability. PMID:27222739
Description of a user-oriented geographic information system - The resource analysis program
NASA Technical Reports Server (NTRS)
Tilmann, S. E.; Mokma, D. L.
1980-01-01
This paper describes the Resource Analysis Program, an applied geographic information system. Several applications are presented which utilized soil and other natural resource data to develop integrated maps and data analyses. These applications demonstrate the methods of analysis and the philosophy of approach used in the mapping system. The applications are evaluated in reference to four major needs of a functional mapping system: data capture, data libraries, data analysis, and mapping and data display. These four criteria are then used to describe an effort to develop the next generation of applied mapping systems. This approach uses inexpensive microcomputers for field applications and should prove to be a viable entry point for users heretofore unable or unwilling to venture into applied computer mapping.
Duryan, Meri; Nikolik, Dragan; van Merode, Godefridus; Curfs, Leopold M G
2015-01-01
The central aspect of this study is a set of reflections on the efficacy of soft operational research techniques in understanding the dynamics of a complex system such as intellectual disability (ID) care providers. Organizations providing services to ID patients are complex and have many interacting stakeholders with often different and competing interests. Understanding the causes for failures in complex systems is crucial for appreciating the multiple perspectives of the key stakeholders of the system. Knowing the factors that adversely affect delivery of a patient-centred care by ID provider organizations offers the potential for identifying more effective resource-allocation solutions. The authors suggest cognitive mapping as a starting point for system dynamics modelling of optimal resource-allocation projects in ID care. The application of the method is illustrated via a case study in one of the ID care providers in the Netherlands. The paper discusses some of the practical implications of applying problem-structuring methods that support gathering feedback from vulnerable service users and front-line workers. The authors concluded that cognitive mapping technique can assist the management of healthcare organizations in strategic decision-making. Copyright © 2013 John Wiley & Sons, Ltd.
Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.
Xiao, Dan; Balcom, Bruce J
2012-07-01
Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the time increase is n fold, where n is the number of pixels in the second dimension. Nevertheless 2D T2 mapping in fluid saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well defined intensity distributions in k-space that may be efficiently determined by new k-space sampling patterns that are developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T2 weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
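The per-pixel inverse Laplace step can be approximated with regularised non-negative least squares, as in this sketch (Python/SciPy; the Tikhonov weight and T2 grid are arbitrary, and the k-space undersampling itself is not shown):

```python
import numpy as np
from scipy.optimize import nnls

def t2_spectrum(decay, echo_times, t2_bins, alpha=0.01):
    """Fit a non-negative T2 distribution to one pixel's multi-echo decay with a
    Tikhonov-regularised NNLS approximation of the inverse Laplace transform.
    `decay`, `echo_times` and `t2_bins` are 1-D numpy arrays."""
    K = np.exp(-echo_times[:, None] / t2_bins[None, :])   # exponential kernel
    A = np.vstack([K, alpha * np.eye(len(t2_bins))])      # damping discourages spiky fits
    b = np.concatenate([decay, np.zeros(len(t2_bins))])
    spectrum, _ = nnls(A, b)
    return spectrum
```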
Macromolecular target prediction by self-organizing feature maps.
Schneider, Gisbert; Schneider, Petra
2017-03-01
Rational drug discovery would greatly benefit from a more nuanced appreciation of the activity of pharmacologically active compounds against a diverse panel of macromolecular targets. Already, computational target-prediction models assist medicinal chemists in library screening, de novo molecular design, optimization of active chemical agents, drug re-purposing, in the spotting of potential undesired off-target activities, and in the 'de-orphaning' of phenotypic screening hits. The self-organizing map (SOM) algorithm has been employed successfully for these and other purposes. Areas covered: The authors recapitulate contemporary artificial neural network methods for macromolecular target prediction, and present the basic SOM algorithm at a conceptual level. Specifically, they highlight consensus target-scoring by the employment of multiple SOMs, and discuss the opportunities and limitations of this technique. Expert opinion: Self-organizing feature maps represent a straightforward approach to ligand clustering and classification. Some of the appeal lies in their conceptual simplicity and broad applicability domain. Despite known algorithmic shortcomings, this computational target prediction concept has been proven to work in prospective settings with high success rates. It represents a prototypic technique for future advances in the in silico identification of the modes of action and macromolecular targets of bioactive molecules.
De la Sen, Manuel; Abbas, Mujahid; Saleem, Naeem
2016-01-01
This paper discusses some convergence properties in fuzzy ordered proximal approaches defined by [Formula: see text]-sequences of pairs, where [Formula: see text] is a surjective self-mapping and [Formula: see text], where A and B are nonempty subsets of an abstract nonempty set X and [Formula: see text] is a partially ordered non-Archimedean fuzzy metric space which is endowed with a fuzzy metric M, a triangular norm * and an ordering [Formula: see text]. The fuzzy set M takes values in a sequence or set [Formula: see text], where the elements of the so-called switching rule [Formula: see text] are defined from [Formula: see text] to a subset of [Formula: see text]. Such a switching rule selects a particular realization of M at the nth iteration and is parameterized by a growth evolution sequence [Formula: see text] and a sequence or set [Formula: see text] which belongs to the so-called [Formula: see text]-lower-bounding mappings, which are defined from [0, 1] to [0, 1]. Some application examples concerning discrete systems under switching rules and best approximation solvability of algebraic equations are discussed.
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-05-11
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides.
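The PSO-tuned SVM idea behind the two records above can be illustrated with a small parameter search (Python with scikit-learn; the swarm size, coefficient values, search bounds, and cross-validation scheme are assumptions, and the GWR-based region segmentation is omitted):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_svm(X, y, n=15, iters=20):
    """Tune log10(C) and log10(gamma) of an RBF SVM with a tiny particle swarm,
    scoring particles by 3-fold cross-validated accuracy (sketch only)."""
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
    pos = np.random.uniform(lo, hi, (n, 2))
    vel = np.zeros_like(pos)

    def score(p):
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    pbest, pval = pos.copy(), np.array([score(p) for p in pos])
    g = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n, 1), np.random.rand(n, 1)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([score(p) for p in pos])
        better = val > pval
        pbest[better], pval[better] = pos[better], val[better]
        g = pbest[pval.argmax()].copy()
    return 10 ** g[0], 10 ** g[1]                      # best C and gamma
```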
NASA Technical Reports Server (NTRS)
Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak
2003-01-01
In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit
2015-01-01
Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin—3,3’-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit
2015-01-01
Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.
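A simplified re-staining step in the spirit of the method can be written with standard colour deconvolution (Python with scikit-image; the output hues below are arbitrary choices, whereas the paper derives optimized hues from a PCA of the image's own colour map):

```python
import numpy as np
from skimage.color import rgb2hed

def restain(rgb):
    """Colour-deconvolve an H-DAB image and re-render haematoxylin and DAB on two
    strongly contrasting hues over a white background (simplified re-staining)."""
    hed = rgb2hed(rgb)                                      # channels: H, E, DAB
    norm = lambda c: (c - c.min()) / (np.ptp(c) + 1e-12)
    h, dab = norm(hed[..., 0]), norm(hed[..., 2])
    out = np.ones(rgb.shape[:2] + (3,))
    out -= h[..., None] * np.array([1.0, 0.5, 0.0])         # haematoxylin -> blue-cyan
    out -= dab[..., None] * np.array([0.0, 0.5, 1.0])       # DAB -> orange-red
    return np.clip(out, 0.0, 1.0)
```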
Optimum aerodynamic design via boundary control
NASA Technical Reports Server (NTRS)
Jameson, Antony
1994-01-01
These lectures describe the implementation of optimization techniques based on control theory for airfoil and wing design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. Recently the method has been implemented in an alternative formulation which does not depend on conformal mapping, so that it can more easily be extended to treat general configurations. The method has also been extended to treat the Euler equations, and results are presented for both two and three dimensional cases, including the optimization of a swept wing.
The Effects of a Concept Map-Based Support Tool on Simulation-Based Inquiry Learning
ERIC Educational Resources Information Center
Hagemans, Mieke G.; van der Meij, Hans; de Jong, Ton
2013-01-01
Students often need support to optimize their learning in inquiry learning environments. In 2 studies, we investigated the effects of adding concept-map-based support to a simulation-based inquiry environment on kinematics. The concept map displayed the main domain concepts and their relations, while dynamic color coding of the concepts displayed…
Wood transportation systems-a spin-off of a computerized information and mapping technique
William W. Phillips; Thomas J. Corcoran
1978-01-01
A computerized mapping system originally developed for planning the control of the spruce budworm in Maine has been extended into a tool for planning road net-work development and optimizing transportation costs. A budgetary process and a mathematical linear programming routine are used interactively with the mapping and information retrieval capabilities of the system...
USDA-ARS's Scientific Manuscript database
Mycobacterium avium subsp. paratuberculosis (MAP) is shed into milk and feces of cows with advanced Johne’s disease, allowing transmission of MAP among animals. The objective of this study was to formulate an optimized protocol for the isolation of MAP from milk. Parameters investigated included che...
[AWAKE CRANIOTOMY: IN SEARCH FOR OPTIMAL SEDATION].
Kulikova, A S; Sel'kov, D A; Kobyakov, G L; Shmigel'skiy, A V; Lubnin, A Yu
2015-01-01
Awake craniotomy is a "gold standard"for intraoperative brain language mapping. One of the main anesthetic challenge of awake craniotomy is providing of optimal sedation for initial stages of intervention. The goal of this study was comparison of different technics of anesthesia for awake craniotomy. Materials and methods: 162 operations were divided in 4 groups: 76 cases with propofol sedation (2-4mg/kg/h) without airway protection; 11 cases with propofol sedation (4-5 mg/kg/h) with MV via LMA; 36 cases of xenon anesthesia; and 39 cases with dexmedetomidine sedation without airway protection. Results and discussion: brain language mapping was successful in 90% of cases. There was no difference between groups in successfulness of brain mapping. However in the first group respiratory complications were more frequent. Three other technics were more safer Xenon anesthesia was associated with ultrafast awakening for mapping (5±1 min). Dexmedetomidine sedation provided high hemodynamic and respiratory stability during the procedure.
NASA Astrophysics Data System (ADS)
Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.
2015-12-01
We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
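The randomized trace estimation step can be isolated in a few lines. The sketch below shows a generic Hutchinson-type estimator in Python; `apply_op`, standing in for the action of the Gaussianized posterior covariance (which in the paper costs PDE solves per application), is a hypothetical placeholder.

```python
import numpy as np

def hutchinson_trace(apply_op, n, num_samples=20, seed=0):
    """Estimate tr(A) given only the action v -> A v, using Rademacher
    probes z, for which E[z^T A z] = tr(A)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
        total += z @ apply_op(z)               # one (expensive) operator apply
    return total / num_samples

# Toy stand-in for the posterior covariance action (PDE solves in the paper).
A = np.diag(np.linspace(0.1, 1.0, 200))
print(hutchinson_trace(lambda v: A @ v, n=200, num_samples=50), np.trace(A))
```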
Android Based Binus Profile Applications as the Marketing Tools of Bina Nusantara University
NASA Astrophysics Data System (ADS)
Iskandar, Karto
2014-03-01
Smartphones and the apps that run on them are not a new phenomenon; both technologies have become fused with today's lifestyle. The ease and speed of access to information lead many companies to use them when marketing products to the public, with the aim of winning an increasingly tough competition. The purpose of this research is to create an Android-based mobile application to assist in marketing and in introducing the Bina Nusantara University profile to prospective students. The research applied the waterfall model of software engineering to produce the Android-based mobile application. The result is an Android-based mobile application that can be used as a viral marketing tool for Bina Nusantara University. The study concludes that mobile technology can serve as an effective medium for marketing and branding, especially for Bina Nusantara University, and that Android-based marketing applications suit the university's applicant segment, which consists mainly of young people. In the future, as network quality improves and costs become more affordable, the application can be made fully online, so that features such as chat and maps can be used optimally.
Multiple indicator cokriging with application to optimal sampling for environmental monitoring
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.
2005-02-01
A probabilistic solution to the problem of spatial interpolation of a variable at an unsampled location consists of estimating the local cumulative distribution function (cdf) of the variable at that location from values measured at neighbouring locations. As this distribution is conditional on the data available at neighbouring locations, it incorporates the uncertainty of the value of the variable at the unsampled location. Geostatistics provides a non-parametric solution to such problems via the various forms of indicator kriging. In a least-squares sense, indicator cokriging is theoretically the best estimator, but in practice its use has been inhibited by problems such as an increased number of violations of order-relations constraints when compared with simpler forms of indicator kriging. In this paper, we describe a methodology and an accompanying computer program for estimating a vector of indicators by simple indicator cokriging, i.e. simultaneous estimation of the cdf for K different thresholds, {F(u,zk), k=1,…,K}, by solving a unique cokriging system for each location at which an estimate is required. This approach produces a variance-covariance matrix of the estimated vector of indicators which is used to fit a model to the estimated local cdf by logistic regression. This model is used to correct any violations of order relations and automatically ensures that all order relations are satisfied, i.e. the estimated cumulative distribution function F^(u,zk) is such that F^(u,zk) ∈ [0,1] for all zk, and F^(u,zk) ≤ F^(u,zk′) for zk ≤ zk′.
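A toy version of the order-relation correction might look as follows: fit a smooth monotone model to the raw cokriged indicator estimates and read corrected cdf values from the fit. The two-parameter logistic form and the example numbers are assumptions for illustration; the paper's regression model may be richer.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_cdf(z, a, b):
    """Monotone model for F(u, z); always in [0, 1] and nondecreasing for b > 0."""
    return 1.0 / (1.0 + np.exp(-(a + b * z)))

# Cokriged indicator estimates at K thresholds; note the order-relation
# violations (a dip, and a value above 1) typical of raw cokriging output.
z_k = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
F_hat = np.array([0.10, 0.35, 0.30, 0.80, 1.05])

params, _ = curve_fit(logistic_cdf, z_k, np.clip(F_hat, 0, 1), p0=(0.0, 1.0))
F_corrected = logistic_cdf(z_k, *params)   # valid, monotone local cdf
```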
Duary, Raj Kumar; Batish, Virender Kumar; Grover, Sunita
2012-03-01
Probiotic bacteria must overcome the toxicity of bile salts secreted in the gut and adhere to the epithelial cells to enable better colonization with extended transit time. Expression of bile salt hydrolase and other proteins on the surface of probiotic bacteria can help in better survivability and optimal functionality in the gut. Two putative Lactobacillus plantarum isolates, Lp9 and Lp91, along with the standard strain CSCC5276 were used. A battery of six housekeeping genes, viz. gapB, dnaG, gyrA, ldhD, rpoD and 16S rRNA, was evaluated using the geNorm 3.4 Excel-based application for normalizing the expression of bile salt hydrolase (bsh), mucus-binding protein (mub), mucus adhesion promoting protein (mapA), and elongation factor thermo unstable (EF-Tu) in Lp9 and Lp91. The maximal level of relative bsh gene expression was recorded in Lp91, with 2.89 ± 0.14, 4.57 ± 0.37 and 6.38 ± 0.19 fold increases at 2% bile salt concentration after 1, 2 and 3 h, respectively. Similarly, the mub and mapA genes were maximally expressed in Lp9, at levels of 20.07 ± 1.28 and 30.92 ± 1.51 fold, when MRS was supplemented with 0.05% mucin and 1% each of bile and pancreatin (pH 6.5). However, in the case of EF-Tu, a maximal expression of 42.84 ± 5.64 fold was recorded in Lp91 in the presence of mucin alone (0.05%). Hence, the expression of bsh, mub, mapA and EF-Tu could be considered as prospective biomarkers for screening novel probiotic Lactobacillus strains for optimal functionality in the gut.
Halper, Sean M; Cetnar, Daniel P; Salis, Howard M
2018-01-01
Engineering many-enzyme metabolic pathways suffers from the design curse of dimensionality. There are an astronomical number of synonymous DNA sequence choices, though relatively few will express an evolutionarily robust, maximally productive pathway without metabolic bottlenecks. To solve this challenge, we have developed an integrated, automated computational-experimental pipeline that identifies a pathway's optimal DNA sequence without high-throughput screening or many cycles of design-build-test. The first step applies our Operon Calculator algorithm to design a host-specific, evolutionarily robust bacterial operon sequence with maximally tunable enzyme expression levels. The second step applies our RBS Library Calculator algorithm to systematically vary enzyme expression levels with the smallest-sized library. After characterizing a small number of constructed pathway variants, measurements are supplied to our Pathway Map Calculator algorithm, which then parameterizes a kinetic metabolic model that ultimately predicts the pathway's optimal enzyme expression levels and DNA sequences. Altogether, our algorithms provide the ability to efficiently map the pathway's sequence-expression-activity space and predict DNA sequences with desired metabolic fluxes. Here, we provide a step-by-step guide to applying the Pathway Optimization Pipeline to a desired multi-enzyme pathway in a bacterial host.
OpenMSI Arrayed Analysis Tools v2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOWEN, BENJAMIN; RUEBEL, OLIVER; DE ROND, TRISTAN
2017-02-07
Mass spectrometry imaging (MSI) enables high-resolution spatial mapping of biomolecules in samples and is a valuable tool for the analysis of tissues from plants and animals, microbial interactions, high-throughput screening, drug metabolism, and a host of other applications. This is accomplished by desorbing molecules from the surface at spatially defined locations, using a laser or ion beam. These ions are analyzed by mass spectrometry and collected into an MSI 'image', a dataset containing unique mass spectra from the sampled spatial locations. MSI is used in a diverse and increasing number of biological applications. The OpenMSI Arrayed Analysis Tool (OMAAT) is a new software method that addresses the challenges of analyzing spatially defined samples in large MSI datasets by providing support for automatic sample-position optimization and ion selection.
Adhikari, Shyam Prasad; Yang, Changju; Slot, Krzysztof; Kim, Hyongsuk
2018-01-10
This paper presents a vision sensor-based solution to the challenging problem of detecting and following trails in highly unstructured natural environments such as forests, rural areas and mountains, using a combination of a deep neural network and dynamic programming. The deep neural network (DNN) concept has recently emerged as a very effective tool for processing vision sensor signals. A patch-based DNN is trained with supervised data to classify fixed-size image patches into "trail" and "non-trail" categories, and reshaped into a fully convolutional architecture to produce a trail segmentation map for arbitrary-sized input images. As trail and non-trail patches do not exhibit clearly defined shapes or forms, the patch-based classifier is prone to misclassification and produces sub-optimal trail segmentation maps. Dynamic programming is introduced to find an optimal trail on the sub-optimal DNN output map. Experimental results showing accurate trail detection for real-world trail datasets captured with a head-mounted vision system are presented.
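The dynamic-programming stage can be sketched independently of the DNN: given the per-pixel trail-likelihood map, find the maximum-score connected path across rows, allowing the column to shift by at most one per row. This is an illustrative reconstruction of the idea, not the authors' implementation; the connectivity constraint is an assumption.

```python
import numpy as np

def best_trail_path(prob):
    """Maximum-score bottom-to-top path through an HxW trail-likelihood map,
    where the column may shift by at most one between consecutive rows."""
    h, w = prob.shape
    score = prob.copy()
    back = np.zeros((h, w), dtype=int)
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)
            prev = lo + int(np.argmax(score[r - 1, lo:hi]))
            back[r, c] = prev
            score[r, c] += score[r - 1, prev]
    path = [int(np.argmax(score[-1]))]          # best endpoint in the last row
    for r in range(h - 1, 0, -1):               # trace the path back
        path.append(back[r, path[-1]])
    return path[::-1]                           # trail column for each row

seg_map = np.random.rand(120, 160)              # stand-in for the DNN output map
trail_columns = best_trail_path(seg_map)
```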
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
A comparison of multiprocessor scheduling methods for iterative data flow architectures
NASA Technical Reports Server (NTRS)
Storch, Matthew
1993-01-01
A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.
Genotyping by Sequencing in Almond: SNP Discovery, Linkage Mapping, and Marker Design
Goonetilleke, Shashi N.; March, Timothy J.; Wirthensohn, Michelle G.; Arús, Pere; Walker, Amanda R.; Mather, Diane E.
2017-01-01
In crop plant genetics, linkage maps provide the basis for the mapping of loci that affect important traits and for the selection of markers to be applied in crop improvement. In outcrossing species such as almond (Prunus dulcis Mill. D. A. Webb), application of a double pseudotestcross mapping approach to the F1 progeny of a biparental cross leads to the construction of a linkage map for each parent. Here, we report on the application of genotyping by sequencing to discover and map single nucleotide polymorphisms in the almond cultivars “Nonpareil” and “Lauranne.” Allele-specific marker assays were developed for 309 tag pairs. Application of these assays to 231 Nonpareil × Lauranne F1 progeny provided robust linkage maps for each parent. Analysis of phenotypic data for shell hardness demonstrated the utility of these maps for quantitative trait locus mapping. Comparison of these maps to the peach genome assembly confirmed high synteny and collinearity between the peach and almond genomes. The marker assays were applied to progeny from several other Nonpareil crosses, providing the basis for a composite linkage map of Nonpareil. Applications of the assays to a panel of almond clones and a panel of rootstocks used for almond production demonstrated the broad applicability of the markers and provide subsets of markers that could be used to discriminate among accessions. The sequence-based linkage maps and single nucleotide polymorphism assays presented here could be useful resources for the genetic analysis and genetic improvement of almond. PMID:29141988
NASA Astrophysics Data System (ADS)
Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza
2017-06-01
Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are linear polarization intensities and several statistical and physical decompositions, such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, simplifies the non-spherical and non-linear patterns of the data structure so that they can be clustered easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g., a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification (e.g., 5% overall). Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
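A minimal kernelized fuzzy C-means of the kind described here can be sketched with an RBF kernel and prototypes kept in the input space, for which the feature-space distance reduces to d²(x, v) = 2(1 - K(x, v)). The PSO tuning, feature extraction and PolSAR preprocessing are omitted, and the kernel width and fuzzifier below are assumed values.

```python
import numpy as np

def kfcm(X, k, m=2.0, gamma=0.5, iters=50, seed=0):
    """Kernel fuzzy C-means with an RBF kernel K(x, v) = exp(-gamma ||x-v||^2);
    for this kernel the feature-space distance is d^2(x, v) = 2 (1 - K(x, v))."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), k, replace=False)]            # initial prototypes
    for _ in range(iters):
        K = np.exp(-gamma * ((X[:, None, :] - V[None]) ** 2).sum(-1))  # (n, k)
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)
        p = 1.0 / (m - 1.0)
        U = 1.0 / (d2 ** p * (d2 ** -p).sum(1, keepdims=True))  # memberships
        W = (U ** m) * K                                   # update weights
        V = (W[..., None] * X[:, None, :]).sum(0) / W.sum(0)[:, None]
    return U.argmax(1), V

# Toy usage: 500 pixels described by 6 assumed polarimetric features, 4 crops.
labels, prototypes = kfcm(np.random.rand(500, 6), k=4)
```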
Saturation pulse design for quantitative myocardial T1 mapping.
Chow, Kelvin; Kellman, Peter; Spottiswoode, Bruce S; Nielles-Vallespin, Sonia; Arai, Andrew E; Salerno, Michael; Thompson, Richard B
2015-10-01
Quantitative saturation-recovery-based T1 mapping sequences are less sensitive to systematic errors than the Modified Look-Locker Inversion recovery (MOLLI) technique but require high-performance saturation pulses. We propose to optimize adiabatic and pulse train saturation pulses for quantitative T1 mapping to have <1% absolute residual longitudinal magnetization (|MZ/M0|) over ranges of B0 and B1 (B1 scale factor) inhomogeneity found at 1.5 T and 3 T. Design parameters for an adiabatic BIR4-90 pulse were optimized for improved performance within 1.5 T B0 (±120 Hz) and B1 (0.7-1.0) ranges. Flip angles in hard pulse trains of 3-6 pulses were optimized for 1.5 T and 3 T, with consideration of T1 values, field inhomogeneities (B0 = ±240 Hz and B1 = 0.4-1.2 at 3 T), and maximum achievable B1 field strength. Residual MZ/M0 was simulated and measured experimentally for current standard and optimized saturation pulses in phantoms and in-vivo human studies. T1 maps were acquired at 3 T in human subjects and a swine using a SAturation recovery single-SHot Acquisition (SASHA) technique with a standard 90°-90°-90° and an optimized 6-pulse train. Measured residual MZ/M0 in phantoms had excellent agreement with simulations over a wide range of B0 and B1. The optimized BIR4-90 reduced the maximum residual |MZ/M0| to <1%, a 5.8× reduction compared to a reference BIR4-90. An optimized 3-pulse train achieved a maximum residual |MZ/M0| <1% for the 1.5 T optimization range compared to 11.3% for a standard 90°-90°-90° pulse train, while a 6-pulse train met this target for the wider 3 T ranges of B0 and B1. The 6-pulse train demonstrated more uniform saturation across both the myocardium and the entire field of view than the other saturation pulses in human studies. T1 maps were more spatially homogeneous with 6-pulse train SASHA than with the reference 90°-90°-90° SASHA in both human and animal studies. Adiabatic and pulse train saturation pulses optimized for the different constraints found at 1.5 T and 3 T achieved <1% residual |MZ/M0| in phantom experiments, enabling greater accuracy in quantitative saturation-recovery T1 imaging.
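Under a toy model (instantaneous hard pulses, complete spoiling of transverse magnetization between pulses, relaxation neglected), the residual longitudinal magnetization after a train of flip angles θi at B1 scale factor s is Mz/M0 = Π cos(s·θi), and the flip angles can be chosen to minimize the worst case over the expected B1 range. The sketch below follows this simplified model only; the paper's optimization additionally accounts for T1 recovery between pulses and off-resonance.

```python
import numpy as np
from scipy.optimize import minimize

B1_SCALES = np.linspace(0.4, 1.2, 81)     # assumed 3 T B1 scale-factor range

def residual_mz(thetas_deg, s):
    """|Mz/M0| after a spoiled hard-pulse train at B1 scale s (toy model)."""
    return abs(np.prod(np.cos(np.radians(thetas_deg) * s)))

def worst_case(thetas_deg):
    return max(residual_mz(thetas_deg, s) for s in B1_SCALES)

# Optimize a 6-pulse train, starting from six nominal 90-degree pulses.
result = minimize(worst_case, x0=[90.0] * 6, method="Nelder-Mead")
print(result.x, worst_case(result.x))
```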
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
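The serial core of the PSO-BP idea, before any MapReduce parallelization, can be sketched as follows: each particle encodes the flattened weight vector of a small network, PSO searches for a low-loss starting point, and gradient-based BP training would then proceed from the best particle. The network size and PSO constants below are illustrative assumptions.

```python
import numpy as np

def mlp_loss(w, X, y, hidden=8):
    """Loss of a tiny one-hidden-layer network unpacked from a flat vector w."""
    d = X.shape[1]
    W1 = w[:d * hidden].reshape(d, hidden)
    W2 = w[d * hidden:].reshape(hidden, 1)
    p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2)))
    return float(np.mean((p.ravel() - y) ** 2))

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO over a dim-dimensional weight space."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g

X = np.random.rand(200, 4)
y = (X.sum(1) > 2).astype(float)
w0 = pso(lambda w: mlp_loss(w, X, y), dim=4 * 8 + 8)   # PSO-chosen initial weights
```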
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lu; Aryal, Uma K.; Dai, Ziyu
2012-01-01
Protein glycosylation is known to play an essential role in both cellular functions and the secretory pathways; however, little information is available on the dynamics of glycosylated N-linked glycosites of fungi. Herein we present the first extensive mapping of glycosylated N-linked glycosites in the industrial strain Aspergillus niger, by applying an optimized solid-phase glycopeptide enrichment protocol using hydrazide-modified magnetic beads. The enrichment protocol was initially optimized using mouse plasma and A. niger secretome samples, and was then applied to profile N-linked glycosites from both the secretome and whole cell lysates of A. niger. A total of 847 unique N-linked glycosites and 330 N-linked glycoproteins were confidently identified by LC-MS/MS. Based on gene ontology analysis, the identified N-linked glycoproteins in the whole cell lysate were primarily localized in the plasma membrane, endoplasmic reticulum, Golgi apparatus, lysosome, and storage vacuoles. The identified N-linked glycoproteins are involved in a wide range of biological processes including gene regulation and signal transduction, protein folding and assembly, protein modification and carbohydrate metabolism. The extensive coverage of glycosylated N-linked glycosites, along with the identification of partial N-linked glycosylation in enzymes involved in different biochemical pathways, provides useful information for functional studies of N-linked glycosylation and its biotechnological applications in A. niger.
Reference frames in virtual spatial navigation are viewpoint dependent
Török, Ágoston; Nguyen, T. Peter; Kolozsvári, Orsolya; Buchanan, Robert J.; Nadasdy, Zoltan
2014-01-01
Spatial navigation in the mammalian brain relies on a cognitive map of the environment. Such cognitive maps enable us, for example, to take the optimal route from a given location to a known target. The formation of these maps is naturally influenced by our perception of the environment, meaning it is dependent on factors such as our viewpoint and choice of reference frame. Yet, it is unknown how these factors influence the construction of cognitive maps. Here, we evaluated how various combinations of viewpoints and reference frames affect subjects' performance when they navigated in a bounded virtual environment without landmarks. We measured both their path length and time efficiency and found that (1) ground perspective was associated with egocentric frame of reference, (2) aerial perspective was associated with allocentric frame of reference, (3) there was no appreciable performance difference between first and third person egocentric viewing positions and (4) while none of these effects were dependent on gender, males tended to perform better in general. Our study provides evidence that there are inherent associations between visual perspectives and cognitive reference frames. This result has implications about the mechanisms of path integration in the human brain and may also inspire designs of virtual reality applications. Lastly, we demonstrated the effective use of a tablet PC and spatial navigation tasks for studying spatial and cognitive aspects of human memory. PMID:25249956
Automated and Accurate Estimation of Gene Family Abundance from Shotgun Metagenomes
Nayfach, Stephen; Bradley, Patrick H.; Wyman, Stacia K.; Laurent, Timothy J.; Williams, Alex; Eisen, Jonathan A.; Pollard, Katherine S.; Sharpton, Thomas J.
2015-01-01
Shotgun metagenomic DNA sequencing is a widely applicable tool for characterizing the functions that are encoded by microbial communities. Several bioinformatic tools can be used to functionally annotate metagenomes, allowing researchers to draw inferences about the functional potential of the community and to identify putative functional biomarkers. However, little is known about how decisions made during annotation affect the reliability of the results. Here, we use statistical simulations to rigorously assess how to optimize annotation accuracy and speed, given parameters of the input data like read length and library size. We identify best practices in metagenome annotation and use them to guide the development of the Shotgun Metagenome Annotation Pipeline (ShotMAP). ShotMAP is an analytically flexible, end-to-end annotation pipeline that can be implemented either on a local computer or a cloud compute cluster. We use ShotMAP to assess how different annotation databases impact the interpretation of how marine metagenome and metatranscriptome functional capacity changes across seasons. We also apply ShotMAP to data obtained from a clinical microbiome investigation of inflammatory bowel disease. This analysis finds that gut microbiota collected from Crohn’s disease patients are functionally distinct from gut microbiota collected from either ulcerative colitis patients or healthy controls, with differential abundance of metabolic pathways related to host-microbiome interactions that may serve as putative biomarkers of disease. PMID:26565399
More on the Best Evolutionary Rate for Phylogenetic Analysis
Massingham, Tim; Goldman, Nick
2017-01-01
The accumulation of genome-scale molecular data sets for nonmodel taxa brings us ever closer to resolving the tree of life of all living organisms. However, despite the depth of data available, a number of studies that each used thousands of genes have reported conflicting results. The focus of phylogenomic projects must thus shift to more careful experimental design. Even though we still have a limited understanding of what are the best predictors of the phylogenetic informativeness of a gene, there is wide agreement that one key factor is its evolutionary rate; but there is no consensus as to whether the rates derived as optimal in various analytical, empirical, and simulation approaches have any general applicability. We here use simulations to infer optimal rates in a set of realistic phylogenetic scenarios with varying tree sizes, numbers of terminals, and tree shapes. Furthermore, we study the relationship between the optimal rate and rate variation among sites and among lineages. Finally, we examine how well the predictions made by a range of experimental design methods correlate with the observed performance in our simulations. We find that the optimal level of divergence is surprisingly robust to differences in taxon sampling and even to among-site and among-lineage rate variation as often encountered in empirical data sets. This finding encourages the use of methods that rely on a single optimal rate to predict a gene's utility. Focusing on correct recovery either of the most basal node in the phylogeny or of the entire topology, the optimal rate is about 0.45 substitutions from root to tip in average Yule trees and about 0.2 in difficult trees with short basal and long apical branches, but all rates leading to divergence levels between about 0.1 and 0.5 perform reasonably well. Testing the performance of six methods that can be used to predict a gene's utility against our simulation results, we find that the probability of resolution, signal-noise analysis, and Fisher information are good predictors of phylogenetic informativeness, but they require specification of at least part of a model tree. Likelihood quartet mapping also shows very good performance but only requires sequence alignments and is thus applicable without making assumptions about the phylogeny. Despite them being the most commonly used methods for experimental design, geometric quartet mapping and the integration of phylogenetic informativeness curves perform rather poorly in our comparison. Instead of derived predictors of phylogenetic informativeness, we suggest that the number of sites in a gene that evolve at near-optimal rates (as inferred here) could be used directly to prioritize genes for phylogenetic inference. In combination with measures of model fit, especially with respect to compositional biases and among-site and among-lineage rate variation, such an approach has the potential to greatly improve marker choice and should be tested on empirical data. PMID:28595363
Quantitative microstructural imaging by scanning Laue x-ray micro- and nanodiffraction
Chen, Xian; Dejoie, Catherine; Jiang, Tengfei; ...
2016-06-08
Local crystal structure, crystal orientation, and crystal deformation can all be probed by Laue diffraction using a submicron x-ray beam. This technique, employed at a synchrotron facility, is particularly suitable for fast mapping of the mechanical and microstructural properties of inhomogeneous multiphase polycrystalline samples, as well as imperfect epitaxial films or crystals. As synchrotron Laue x-ray microdiffraction enters its 20th year of existence and new synchrotron nanoprobe facilities are being built and commissioned around the world, we take the opportunity to review current capabilities as well as the latest technical developments. Fast data collection provided by state-of-the-art area detectors and fully automated pattern-indexing algorithms optimized for speed make it possible to map large portions of a sample with fine step size and obtain quantitative images of its microstructure in near real time. Lastly, we extrapolate how the technique is anticipated to evolve in the near future and its potential emerging applications at a free-electron laser facility.
Finding theory- and evidence-based alternatives to fear appeals: Intervention Mapping.
Kok, Gerjo; Bartholomew, L Kay; Parcel, Guy S; Gottlieb, Nell H; Fernández, María E
2014-04-01
Fear arousal (vividly showing people the negative health consequences of life-endangering behaviors) is popular as a method to raise awareness of risk behaviors and to change them into health-promoting behaviors. However, most data suggest that, under conditions of low efficacy, the resulting reaction will be defensive. Instead of applying fear appeals, health promoters should identify effective alternatives to fear arousal by carefully developing theory- and evidence-based programs. The Intervention Mapping (IM) protocol helps program planners to optimize chances for effectiveness. IM describes the intervention development process in six steps: (1) assessing the problem and community capacities, (2) specifying program objectives, (3) selecting theory-based intervention methods and practical applications, (4) designing and organizing the program, (5) planning for adoption and implementation, and (6) developing an evaluation plan. Authors who used IM indicated that it helped in bringing the development of interventions to a higher level. © 2013 The Authors. International Journal of Psychology published by John Wiley & Sons Ltd on behalf of the International Union of Psychological Science.
Rendering of HDR content on LDR displays: an objective approach
NASA Astrophysics Data System (ADS)
Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick
2015-09-01
Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on the quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
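As a schematic of objective TMO parameter selection, the sketch below grid-searches the key value of a simple global Reinhard-style operator against a crude score. The naturalness proxy (mean luminance near mid-gray) and contrast proxy (output standard deviation) are stand-ins invented for this example and are not the contrast-reversal and naturalness measures proposed in the paper.

```python
import numpy as np

def reinhard_global(lum, a):
    """Global tone mapping: scale by key value a, then compress with L/(1+L)."""
    scaled = a * lum / np.exp(np.mean(np.log(lum + 1e-6)))   # log-average norm
    return scaled / (1.0 + scaled)

def score(ldr_lum):
    """Toy objective: favor mid-gray mean (naturalness proxy) plus retained
    spread (contrast proxy); the paper's actual metrics differ."""
    return -abs(ldr_lum.mean() - 0.5) + ldr_lum.std()

hdr = np.exp(np.random.randn(128, 128) * 2.0)   # stand-in HDR luminance map
best_a = max(np.linspace(0.05, 1.0, 20),
             key=lambda a: score(reinhard_global(hdr, a)))
```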
Fast microwave assisted pyrolysis of biomass using microwave absorbent.
Borges, Fernanda Cabral; Du, Zhenyi; Xie, Qinglong; Trierweiler, Jorge Otávio; Cheng, Yanling; Wan, Yiqin; Liu, Yuhuan; Zhu, Rongbi; Lin, Xiangyang; Chen, Paul; Ruan, Roger
2014-03-01
A novel concept of fast microwave-assisted pyrolysis (fMAP) in the presence of microwave absorbents was presented and examined. Wood sawdust and corn stover were pyrolyzed by means of microwave heating with silicon carbide (SiC) as the microwave absorbent. The bio-oil was characterized, and the effects of temperature, feedstock loading, particle size, and vacuum degree were analyzed. For wood sawdust, a temperature of 480°C, 50 grit SiC, and a biomass feeding rate of 2 g/min were the optimal conditions, with a maximum bio-oil yield of 65 wt.%. For corn stover, temperatures ranging from 490°C to 560°C, biomass particle sizes from 0.9 mm to 1.9 mm, and a vacuum degree lower than 100 mmHg gave a maximum bio-oil yield of 64 wt.%. This study shows that the use of microwave absorbents for fMAP is feasible and a promising technology to improve the practical value and commercial application outlook of microwave-based pyrolysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
Database resources of the National Center for Biotechnology Information: 2002 update
Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.
2002-01-01
In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI's web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, Human-Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:11752242
Raman lidar characterization using a reference lamp
NASA Astrophysics Data System (ADS)
Landulfo, Eduardo; da Costa, Renata F.; Rodrigues, Patricia F.; da Silva Lopes, Fábio J.
2014-10-01
The determination of the amount of water vapor in the atmosphere using lidar is a calibration-dependent technique. Different collocated instruments are used for this purpose, such as radiosondes and microwave radiometers. When no collocated instruments are available, an independent lamp-mapping calibration technique can be used. Aiming to establish an independent technique for the calibration of the six-channel Nd-YAG Raman lidar system located at the Center for Lasers and Applications (CLA), São Paulo, Brazil, an optical characterization of the system was first performed using a reference tungsten lamp. This characterization is useful to identify any possible distortions in the interference filters, the telescope mirror, and stray-light contamination. In this paper we show three lamp-mapping characterizations (01/16/2014, 01/22/2014, 04/09/2014). The first day is used to demonstrate how the technique is useful to detect stray light, the second how sensitive it is to the position of the filters, and the third demonstrates a well-optimized optical system.
Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.
Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun
2017-07-01
In recent years, taking photos and capturing videos with mobile devices have become increasingly popular. Emerging applications based on the depth reconstruction technique have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene condition in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. Particularly, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.
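The coarse-to-fine core of such a pipeline can be sketched for a two-view special case: estimate disparity with a cheap block matcher at the coarsest pyramid level, then at each finer level double the upsampled estimate and refine it within a small search window. The confidence maps, multi-view fusion and global optimization stage of the paper are omitted; this is an illustrative skeleton only, and the matcher and window sizes are assumptions.

```python
import numpy as np

def refine_disparity(left, right, init, radius=2, patch=3):
    """Refine a per-pixel disparity estimate with a small SAD block matcher."""
    h, w = left.shape
    disp = init.copy()
    for y in range(patch, h - patch):
        for x in range(patch, w - patch):
            best_cost, best_d = np.inf, disp[y, x]
            for d in range(int(init[y, x]) - radius, int(init[y, x]) + radius + 1):
                if d < 0 or x - d < patch:
                    continue
                cost = np.abs(left[y - patch:y + patch + 1, x - patch:x + patch + 1]
                              - right[y - patch:y + patch + 1, x - d - patch:x - d + patch + 1]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def coarse_to_fine_depth(left, right, levels=3):
    """Estimate disparity coarse-to-fine on an image pyramid."""
    pyramid = [(left, right)]
    for _ in range(levels - 1):
        l, r = pyramid[-1]
        pyramid.append((l[::2, ::2], r[::2, ::2]))      # naive 2x downsampling
    disp = np.zeros_like(pyramid[-1][0])
    for l, r in reversed(pyramid):                      # coarsest to finest
        disp = refine_disparity(l, r, disp)
        if l.shape != left.shape:                       # upsample for next level
            disp = np.repeat(np.repeat(disp * 2.0, 2, axis=0), 2, axis=1)
    return disp

L = np.random.rand(64, 64)
R = np.roll(L, -4, axis=1)            # synthetic pair with disparity 4
d = coarse_to_fine_depth(L, R)
```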
Conversion of a Capture ELISA to a Luminex xMAP Assay using a Multiplex Antibody Screening Method
Baker, Harold N.; Murphy, Robin; Lopez, Erica; Garcia, Carlos
2012-01-01
The enzyme-linked immunosorbent assay (ELISA) has long been the primary tool for detection of analytes of interest in biological samples for both life science research and clinical diagnostics. However, ELISA has limitations. It is typically performed in a 96-well microplate, and the wells are coated with capture antibody, requiring a relatively large amount of sample to capture an antigen of interest. The large surface area of the wells and the hydrophobic binding of capture antibody can also lead to non-specific binding and increased background. Additionally, most ELISAs rely upon enzyme-mediated amplification of signal in order to achieve reasonable sensitivity. Such amplification is not always linear and can thus skew results. In the past 15 years, a new technology has emerged that offers the benefits of the ELISA, but also enables higher throughput, increased flexibility, reduced sample volume, and lower cost, with a similar workflow 1, 2. Luminex xMAP Technology is a microsphere (bead) array platform enabling both monoplex and multiplex assays that can be applied to both protein and nucleic acid applications 3-5. The beads have the capture antibody covalently immobilized on a smaller surface area, requiring less capture antibody and smaller sample volumes, compared to ELISA, and non-specific binding is significantly reduced. Smaller sample volumes are important when working with limiting samples such as cerebrospinal fluid, synovial fluid, etc. 6. Multiplexing the assay further reduces sample volume requirements, enabling multiple results from a single sample. Recent improvements by Luminex include: the new MAGPIX system, a smaller, less expensive, easier-to-use analyzer; Low-Concentration Magnetic MagPlex Microspheres, which eliminate the need for expensive filter plates and come in a working concentration better suited for assay development and low-throughput applications; and the xMAP Antibody Coupling (AbC) Kit, which includes a protocol, reagents, and consumables necessary for coupling beads to the capture antibody of interest. (See Materials section for a detailed list of kit contents.) In this experiment, we convert a pre-optimized ELISA assay for the TNF-alpha cytokine to the xMAP platform and compare the performance of the two methods 7-11. TNF-alpha is a biomarker used in the measurement of inflammatory responses in patients with autoimmune disorders. We begin by coupling four candidate capture antibodies to four different microsphere sets or regions. When mixed together, these four sets allow for the simultaneous testing of all four candidates with four separate detection antibodies to determine the best antibody pair, saving reagents, sample and time. Two xMAP assays are then constructed with the two best antibody pairs, and their performance is compared to that of the original ELISA assay in regards to signal strength, dynamic range, and sensitivity. PMID:22806215
Chuang, Li-Yeh; Moi, Sin-Hua; Lin, Yu-Da; Yang, Cheng-Hong
2016-10-01
Evolutionary algorithms can overcome the computational limitations of statistical evaluation of large datasets for high-order single nucleotide polymorphism (SNP) barcodes. Previous studies have proposed several chaotic particle swarm optimization (CPSO) methods to detect SNP barcodes for disease analysis (e.g., for breast cancer and chronic diseases). This work evaluated additional chaotic maps combined with the particle swarm optimization (PSO) method to detect SNP barcodes using a high-dimensional dataset. Nine chaotic maps were used to improve the results of the PSO method, and the searching ability of all the CPSO methods was compared. The XOR and ZZ disease models were used to compare all chaotic maps combined with the PSO method. Efficacy evaluations of the CPSO methods were based on statistical values from the chi-square test (χ2). The results showed that chaotic maps can improve the searching ability of the PSO method when the population is trapped in a local optimum. The minor allele frequency (MAF) analysis indicated that, amongst all CPSO methods and across the numbers of SNPs and sample sizes, the highest χ2 values in all datasets were found with the Sinai chaotic map combined with the PSO method. We used simple linear regression on the gbest values over all generations to compare all the methods. The Sinai chaotic map combined with the PSO method provided the highest β values (β ≥ 0.32 in the XOR disease model and β ≥ 0.04 in the ZZ disease model) and significant p-values (p < 0.001 in both the XOR and ZZ disease models). The Sinai chaotic map was thus found to effectively enhance the fitness values (χ2) of the PSO method, indicating that the Sinai chaotic map combined with the PSO method is more effective at detecting potential SNP barcodes in both the XOR and ZZ disease models. Copyright © 2016 Elsevier B.V. All rights reserved.
Hybrid cloud and cluster computing paradigms for life science applications.
Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey
2010-12-21
Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing especially for parallel data intensive applications. However they have limited applicability to some areas such as data mining because MapReduce has poor performance on problems with an iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability comparisons in several important non iterative cases. These are linked to MPI applications for final stages of the data analysis. Further we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment while Twister promises a uniform programming environment for many Life Sciences applications. We used commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
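The iterative structure at issue is easy to see in code: k-means, a typical data-analysis kernel, is one map phase (assign each point to its nearest centroid) plus one reduce phase (average the points under each key), repeated until convergence. A runtime that re-reads its input on every MapReduce job, as classic Hadoop does, pays that cost once per iteration, which is what Twister's long-lived iterative MapReduce and MPI implementations avoid. A toy in-memory rendering of the pattern:

```python
import numpy as np

def kmeans_mapreduce(X, k, iters=10, seed=0):
    """k-means phrased as repeated map (assign) / reduce (average) phases."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):                       # the iterative structure
        # Map phase: emit (nearest-centroid-id, point) for every point.
        keys = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        # Reduce phase: average the points grouped under each key.
        centroids = np.stack([X[keys == j].mean(0) if np.any(keys == j)
                              else centroids[j] for j in range(k)])
    return centroids

centers = kmeans_mapreduce(np.random.rand(1000, 2), k=3)
```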
Rapid quantitative chemical mapping of surfaces with sub-2 nm resolution
NASA Astrophysics Data System (ADS)
Lai, Chia-Yun; Perri, Saverio; Santos, Sergio; Garcia, Ricardo; Chiesa, Matteo
2016-05-01
We present a theory that exploits four observables in bimodal atomic force microscopy to produce maps of the Hamaker constant H. The quantitative H maps may be employed by the broader community to directly interpret the high resolution of standard bimodal AFM images as chemical maps while simultaneously quantifying chemistry in the non-contact regime. We further provide a simple methodology to optimize a range of operational parameters for which H is in the closest agreement with the Lifshitz theory in order to (1) simplify data acquisition and (2) generalize the methodology to any set of cantilever-sample systems. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00496b
Implementation of Open-Source Web Mapping Technologies to Support Monitoring of Governmental Schemes
NASA Astrophysics Data System (ADS)
Pulsani, B. R.
2015-10-01
Several schemes are undertaken by the government to uplift the social and economic condition of people. The monitoring of these schemes is done through information technology, where the involvement of Geographic Information Systems (GIS) is lacking. To demonstrate the benefits of thematic mapping as a tool for assisting officials in making decisions, a web mapping application was built for three government programs: the Mother and Child Tracking System (MCTS), Telangana State Housing Corporation Limited (TSHCL) and Ground Water Quality Mapping (GWQM). Indeed, the three applications depicted the distribution of various parameters thematically and helped in identifying areas with higher and weaker distributions. Based on the three applications, the study finds that many government schemes share a similar thematic-mapping character and hence concludes that this kind of approach can be implemented for other schemes as well. These applications have been developed using SharpMap, a free and open-source C# mapping library for developing geospatial applications. The study highlights the cost benefits of SharpMap, brings out the advantages of this library over proprietary vendors, and further discusses its advantages over other open-source libraries as well.
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
This paper investigates the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization algorithm. Three benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
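The Lozi map is the two-dimensional system x_{n+1} = 1 - a|x_n| + b*y_n, y_{n+1} = x_n, chaotic around the classic parameters a = 1.7, b = 0.5. Used as a pseudo-random number generator, its trajectory is rescaled to [0, 1] and substituted for the uniform draws in the PSO velocity update; tuning a and b is exactly what changes the generator's statistics. A minimal sketch follows (the rescaling convention is an assumption):

```python
class LoziRNG:
    """Pseudo-random numbers in [0, 1] generated by iterating the Lozi map."""

    def __init__(self, a=1.7, b=0.5, x=0.1, y=0.1):
        self.a, self.b, self.x, self.y = a, b, x, y

    def random(self):
        # Lozi map: x' = 1 - a|x| + b*y,  y' = x
        self.x, self.y = 1.0 - self.a * abs(self.x) + self.b * self.y, self.x
        return (self.x + 2.0) / 4.0   # crude rescale of the attractor range

rng = LoziRNG()
r1, r2 = rng.random(), rng.random()   # drop-in for rand() in the PSO velocity update
```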
NASA Astrophysics Data System (ADS)
Wei, Xialu
In this study, the spark plasma sintering (SPS) is employed to consolidate poorly sinter-able ultra-high temperature ceramic (UHTC) powders due to the fact that the conjoint application of electric current and mechanical pressure during SPS can largely offset the required processing temperature. Zirconium carbide (ZrC) is selected as target material as it broadly represents properties of typical UHTCs. Investigations on SPS of ZrC are concurrently conducted in two correlated regimes: One regime is used to optimize the SPS densification efficiency by manipulating the loading schematics. The other regime is used to produce complex shape carbide components for high temperature applications via SPS. Both theoretical and experimental studies are involved in the achievement of the formulated research objectives. Consolidation of ZrC has been carried out to form a densification map with determining the optimal processing parameters. The densification of ZrC is studied through the continuum theory of sintering, in which the ZrC power-law creep parameters have been determined through the clarification of electrical and thermal aspects of the employed SPS system. Then the SPS-forging setup is proposed as it is theoretically and experimentally proven to be able to render more densification than the regular SPS. SPS-forging and regular SPS are eventually integrated into a hybrid loading mode SPS regime to combine the advantages of the individual setups to obtain the optimal densification kinetics. Annular shape ZrC pellets have been fabricated using SPS. Finite element modeling framework is constructed to manifest the thermomechanical interactions during the SPS of annular shape ZrC specimens. The fabrication procedures are practically adapted to produce also annular shape carbide composites with excellent high temperature structural strength being used as alternative SPS tooling components. The applicability of annular shape fuel pellet to accommodate volume swelling under its service conditions is investigated. The irradiation-induced swelling phenomena are analyzed by analytical modeling and finite element simulations, in which the generated fission products are considered to be the sources of the fuel pellet swelling.
2008-03-01
multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the ... correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is ... required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space
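The fragment above is truncated, but the standard space-mapping construction it cites can be stated compactly. The following is a hedged reconstruction in generic notation (f is the high-fidelity model, g the low-fidelity model, p the space mapping; only g(x̃) appears in the fragment itself):

$$ s(x) = g\big(p(x)\big), \qquad p^{*} = \arg\min_{p} \sum_{i} \big\| f(x_i) - g\big(p(x_i)\big) \big\|^{2}, $$

so that the corrected surrogate s reproduces the high-fidelity response on the training points even though g is defined over a lower-dimensional space.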
Hardt, Oliver; Nadel, Lynn
2009-01-01
Cognitive map theory suggested that exploring an environment and attending to a stimulus should lead to its integration into an allocentric environmental representation. We here report that directed attention in the form of exploration serves to gather information needed to determine an optimal spatial strategy, given task demands and characteristics of the environment. Attended environmental features may integrate into spatial representations if they meet the requirements of the optimal spatial strategy: when learning involves a cognitive mapping strategy, cues with high codability (e.g., concrete objects) will be incorporated into a map, but cues with low codability (e.g., abstract paintings) will not. However, instructions encouraging map learning can lead to the incorporation of cues with low codability. On the other hand, if spatial learning is not map-based, abstract cues can and will be used to encode locations. Since exploration appears to determine what strategy to apply and whether or not to encode a cue, recognition memory for environmental features is independent of whether or not a cue is part of a spatial representation. In fact, when abstract cues were used in a way that was not map-based, or when they were not used for spatial navigation at all, they were nevertheless recognized as familiar. Thus, the relation between exploratory activity on the one hand and spatial strategy and memory on the other appears more complex than initially suggested by cognitive map theory.
Bae, Sung Jin; Jang, Sung Ho; Seo, Jeong Pyo; Chang, Pyung Hun
2017-07-01
The optimal conditions inducing proper brain activation during performance of rehabilitation robots should be examined to enhance the efficiency of robot rehabilitation based on the concept of brain plasticity. In this study, we investigated differences in cortical activation according to the speed of passive wrist movements performed by a rehabilitation robot for stroke patients. Nine stroke patients with right hemiparesis participated in this study. Passive movements of the affected wrist were performed by the rehabilitation robot at three different speeds: slow (0.25 Hz), moderate (0.5 Hz), and fast (0.75 Hz). We used functional near-infrared spectroscopy to measure brain activity during the robot-driven passive movements. A group-averaged activation map and the relative changes in oxy-hemoglobin (ΔOxyHb) were measured in two regions of interest, the primary sensorimotor cortex (SM1) and the premotor area (PMA), as well as across all channels. In the group-averaged activation map, the contralateral SM1, PMA and somatosensory association cortex (SAC) showed the greatest significant activation during movements at 0.75 Hz, while no significantly activated area was observed at 0.5 Hz. Regarding ΔOxyHb, no significant difference was observed among the three speeds in any region. In conclusion, the contralateral SM1, PMA and SAC showed the greatest activation at the fast speed (0.75 Hz) compared with the slow (0.25 Hz) and moderate (0.5 Hz) speeds. Our results suggest an optimal speed for execution of the wrist rehabilitation robot. Therefore, we believe that our findings point to several promising applications for future research regarding useful and empirically based robot rehabilitation therapy.
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.
Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz
2012-09-24
The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.
NASA Astrophysics Data System (ADS)
Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd
2018-01-01
The growing use of optimization for geographic object-based image analysis and the possibility of deriving a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil-adjusted vegetation index) were combined and subjected to a segmentation process, with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through a data mining algorithm (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The results show that the optimized process produces better land-use/land-cover classification, with an overall classification accuracy of 91.79%, compared with 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.
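As an illustration of the comparison performed (decision tree vs. SVM vs. RF on object attributes), not the authors' pipeline or data, a scikit-learn sketch with synthetic stand-in features:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for image-object attributes
    # (optical bands, SAR, NDVI, NDWI, SAVI statistics per object).
    X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                               n_classes=5, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                      ("SVM", SVC(kernel="rbf", C=10.0)),
                      ("random forest", RandomForestClassifier(n_estimators=200,
                                                               random_state=0))]:
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"{name}: overall accuracy = {acc:.2%}")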
D-Optimal Experimental Design for Contaminant Source Identification
NASA Astrophysics Data System (ADS)
Sai Baba, A. K.; Alexanderian, A.
2016-12-01
Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or a parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters - in our case, the sparsity of the sensors, to maximize the information gain subject to some physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain, and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE constrained problems with high-dimensional parameter fields. A major challenge for the OED, is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluation of objective function and gradient involving the determinant of large and dense matrices - this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
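For a linear Gaussian model (a common setting consistent with, though not stated in, the abstract), the D-optimality objective that makes the log-determinant bottleneck concrete can be sketched as follows; all symbols here are generic rather than taken from the talk:

$$ \Phi_{D}(w) = \tfrac{1}{2}\,\log\det\!\Big( I + \Gamma_{\mathrm{pr}}^{1/2}\, F^{\top} W(w)\, \Gamma_{\mathrm{noise}}^{-1}\, F\, \Gamma_{\mathrm{pr}}^{1/2} \Big), $$

maximized over the sensor weights w subject to sparsity or budget constraints, where F is the parameter-to-observation map, Γ_pr the prior covariance, Γ_noise the noise covariance, and W(w) the sensor-selection operator. Each evaluation of Φ_D and its gradient involves the determinant of a large dense matrix, which is precisely the quantity the randomized estimators approximate.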
DOE Office of Scientific and Technical Information (OSTI.GOV)
Economopoulou, M.A.; Economopoulou, A.A.; Economopoulos, A.P., E-mail: eco@otenet.gr
2013-11-15
Highlights: • A two-step (strategic and detailed optimal planning) methodology is used for solving complex MSW management problems. • A software package is outlined, which can be used for generating detailed optimal plans. • Sensitivity analysis compares alternative scenarios that address objections and/or wishes of local communities. • A case study shows the application of the above procedure in practice and demonstrates the results and benefits obtained. - Abstract: The paper describes a software system capable of formulating alternative optimal Municipal Solid Waste (MSW) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and the annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study area, municipality and installation level, constituting in effect a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan. The formulated plan was able to: (a) serve 113 municipalities and communities that generate nearly 2 million t/y of commingled MSW with distinctly different waste collection patterns, (b) take into consideration several existing waste transfer stations (WTS) and optimize their use within the overall plan, (c) select the most appropriate sites among the potentially suitable (new and in-use) ones, (d) generate the optimal profile of each WTS proposed, and (e) perform sensitivity analysis so as to define the impact of selected sets of constraints (limitations in the availability of sites and in the capacity of their installations) on the design and cost of the ensuing optimal waste transfer system. The results show that optimal planning offers significant economic savings to municipalities, while reducing at the same time the present levels of traffic, fuel consumption and air emissions in the congested Athens basin.
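As a hedged illustration of the kind of objective described (transport and treatment costs minimized subject to waste-generation and site-capacity constraints), a toy transportation-problem instance can be solved with an off-the-shelf LP solver; all numbers below are invented:

    import numpy as np
    from scipy.optimize import linprog

    # Toy instance: 3 municipalities shipping waste (t/y) to 2 candidate sites.
    supply = np.array([500.0, 800.0, 300.0])      # waste generated per municipality
    capacity = np.array([1000.0, 700.0])          # capacity per candidate site
    cost = np.array([[12.0, 20.0],                # unit transport + treatment cost
                     [ 9.0, 14.0],
                     [25.0,  8.0]])

    n_m, n_s = cost.shape
    c = cost.ravel()                              # decision vars x[m, s], flattened

    # Equality: each municipality ships all of its waste somewhere.
    A_eq = np.zeros((n_m, n_m * n_s))
    for m in range(n_m):
        A_eq[m, m * n_s:(m + 1) * n_s] = 1.0
    # Inequality: site capacity limits.
    A_ub = np.zeros((n_s, n_m * n_s))
    for s in range(n_s):
        A_ub[s, s::n_s] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply,
                  bounds=(0, None))
    print(res.x.reshape(n_m, n_s))                # optimal flows
    print(res.fun)                                # minimal annual cost

The real system additionally annualizes capital investment and handles site selection (a mixed-integer extension), which this sketch omits.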
Genotyping by Sequencing in Almond: SNP Discovery, Linkage Mapping, and Marker Design.
Goonetilleke, Shashi N; March, Timothy J; Wirthensohn, Michelle G; Arús, Pere; Walker, Amanda R; Mather, Diane E
2018-01-04
In crop plant genetics, linkage maps provide the basis for the mapping of loci that affect important traits and for the selection of markers to be applied in crop improvement. In outcrossing species such as almond (Prunus dulcis (Mill.) D. A. Webb), application of a double pseudotestcross mapping approach to the F1 progeny of a biparental cross leads to the construction of a linkage map for each parent. Here, we report on the application of genotyping by sequencing to discover and map single nucleotide polymorphisms in the almond cultivars "Nonpareil" and "Lauranne." Allele-specific marker assays were developed for 309 tag pairs. Application of these assays to 231 Nonpareil × Lauranne F1 progeny provided robust linkage maps for each parent. Analysis of phenotypic data for shell hardness demonstrated the utility of these maps for quantitative trait locus mapping. Comparison of these maps to the peach genome assembly confirmed high synteny and collinearity between the peach and almond genomes. The marker assays were applied to progeny from several other Nonpareil crosses, providing the basis for a composite linkage map of Nonpareil. Applications of the assays to a panel of almond clones and a panel of rootstocks used for almond production demonstrated the broad applicability of the markers and provide subsets of markers that could be used to discriminate among accessions. The sequence-based linkage maps and single nucleotide polymorphism assays presented here could be useful resources for the genetic analysis and genetic improvement of almond. Copyright © 2018 Goonetilleke et al.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is an iterative process that begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The geometric distance sorting technique largely reduces the unwanted increase of the objective function value that constraint adding inevitably causes, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient technique for choosing constraints. By integrating the smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving fluence map optimization with dose-volume constraints.
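A much-simplified sketch of the loop structure (solve a quadratic model without dose-volume constraints, find violating voxels, constrain one chosen voxel, repeat). Here a weighted least-squares solve stands in for the interior-point QP, and violation magnitude (i.e., classic dose sorting) stands in for the geometric distance, so this shows only the iteration skeleton, not the paper's sorting criterion:

    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(0)
    n_beamlets, n_tumor, n_oar = 20, 40, 60
    A_t = rng.uniform(0.5, 1.0, (n_tumor, n_beamlets))   # dose matrix, tumor voxels
    A_o = rng.uniform(0.0, 0.4, (n_oar, n_beamlets))     # dose matrix, OAR voxels
    p_t = np.full(n_tumor, 70.0)                         # prescribed tumor dose
    d_max, frac = 20.0, 0.30       # DV constraint: at most 30% of OAR above 20

    capped = np.zeros(n_oar, dtype=bool)   # voxels given hard caps so far
    for _ in range(50):
        W = 50.0   # heavy weight: a crude stand-in for a hard dose cap
        rows = [A_t] + [W * A_o[i:i + 1] for i in np.where(capped)[0]]
        b = np.concatenate([p_t, np.full(capped.sum(), W * d_max)])
        x = lsq_linear(np.vstack(rows), b, bounds=(0.0, np.inf)).x
        dose_o = A_o @ x
        over = dose_o > d_max
        if over.mean() <= frac:              # DV constraint satisfied: done
            break
        cand = np.where(over & ~capped)[0]   # violators not yet constrained
        if cand.size == 0:
            break
        # Pick the next voxel to constrain; the paper sorts candidates by the
        # geometric distance in the transformed QP, approximated here by the
        # violation magnitude.
        capped[cand[np.argmax(dose_o[cand])]] = True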
Agrolandscape Research of Geosystems in the South of Central Siberia
NASA Astrophysics Data System (ADS)
Lysanova, G.; Soja, A. J.
2012-12-01
The Minusinskaya Basin, the area under study, is situated in the south of Central Siberia and is an agrarian region that differs from other territories of Siberia. The territory provides foodstuffs not only for its own population but for other regions as well. Natural and climatic conditions favour the development of agriculture and cattle breeding. A complex geographical study of rural lands, implemented through two approaches (a natural block and an industrial-system block), is necessary for the rational use of agrolandscapes. Agrolandscapes are the objects for rationalizing land management in agricultural regions. In our view, using a landscape map as the base for producing an agrolandscape map (Fig. 1a) and a map of the agronatural potential of geosystems (Fig. 2) makes it possible to take stock of the reserves of agricultural lands in both quantitative and qualitative respects, and also to determine ways of optimally transforming arable lands depending on the natural conditions of regions and their development. Landscape maps reflecting the differentiation not only of natural formations changed by anthropogenic influence but also of their natural analogues are among the important tools of planning for optimal land use. The main principles for producing a medium-scale typological landscape map arose from the goals and tasks of the agrolandscape estimation of the territory [1]. The landscape map was produced according to V. A. Nikolaev's methodology [2]: types of landscapes were correlated with types of land use, the composition of cereals in crop rotation, agro-techniques, crop capacity, climate indices, etc. Existing natural-agricultural systems are shown in the map; their characteristics include information about both the natural and the agricultural blocks. The agronatural potential was calculated by summing the estimations of its component parts. As a result of these calculations, the 30 arable agrolandscapes marked on the landscape map were grouped, according to their sum of points, into three groups of agrolandscapes with high, medium, and low-medium agronatural potential. Thus medium-scale typological landscape and agrolandscape maps were produced and the agronatural potential was estimated, on the basis of detailed agrolandscape research and the study of the natural geosystems of the Minusinskaya Basin. Consideration of the agronatural potential of a territory helps to identify regions where its separate types may be developed in the future, proceeding from the available natural and economic preconditions. Successful development of agriculture is largely connected with the right agrolandscape use. That is why the optimal variant of land use can and should be sought through a definite combination of transformational organizational-economic measures and adaptive landscape-ecological measures, which would allow a sharp increase in the potential for self-regulation. REFERENCES: 1. G. I. Lysanova, Landscape analysis of the agronatural potential of geosystems. Irkutsk, 2001. 187 p. 2. V. A. Nikolaev, Regional agrolandscape research // Natural complexes and agriculture. Voprosy geografii. M.: Mysl, 1984. Coll. 124. P. 73-83.
Recent advances in sequence assembly: principles and applications.
Chen, Qingfeng; Lan, Chaowang; Zhao, Liang; Wang, Jianxin; Chen, Baoshan; Chen, Yi-Ping Phoebe
2017-11-01
The application of advanced sequencing technologies and the rapid growth of various sequence data have led to increasing interest in DNA sequence assembly. However, repeats and polymorphism occur frequently in genomes, and each of these has different impacts on assembly. Further, many new applications for sequencing, such as metagenomics regarding multiple species, have emerged in recent years. These not only give rise to higher complexity but also prevent short-read assembly in an efficient way. This article reviews the theoretical foundations that underlie current mapping-based assembly and de novo-based assembly, and highlights the key issues and feasible solutions that need to be considered. It focuses on how individual processes, such as optimal k-mer determination and error correction in assembly, rely on intelligent strategies or high-performance computation. We also survey primary algorithms/software and offer a discussion on the emerging challenges in assembly. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
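As a toy illustration of one such individual process (k-mer determination and spectrum-based error flagging), independent of any specific assembler:

    from collections import Counter

    def kmer_spectrum(reads, k):
        """Count all k-mers across a set of reads."""
        counts = Counter()
        for r in reads:
            for i in range(len(r) - k + 1):
                counts[r[i:i + k]] += 1
        return counts

    reads = ["ACGTACGTGACG", "CGTACGTGACGT", "ACGTACGTGACG"]
    for k in (4, 6, 8):
        spec = kmer_spectrum(reads, k)
        # Singleton k-mers are likely sequencing errors at realistic coverage;
        # comparing their fraction across k is one crude way to choose k.
        rare = sum(1 for c in spec.values() if c == 1)
        print(f"k={k}: {len(spec)} distinct k-mers, {rare} singletons")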
Biocatalytic Synthesis of the Rare Sugar Kojibiose: Process Scale-Up and Application Testing.
Beerens, Koen; De Winter, Karel; Van de Walle, Davy; Grootaert, Charlotte; Kamiloglu, Senem; Miclotte, Lisa; Van de Wiele, Tom; Van Camp, John; Dewettinck, Koen; Desmet, Tom
2017-07-26
Cost-efficient (bio)chemical production processes are essential to evaluate the commercial and industrial applications of promising carbohydrates and also are essential to ensure economically viable production processes. Here, the synthesis of the naturally occurring disaccharide kojibiose (2-O-α-d-glucopyranosyl-d-glucopyranoside) was evaluated using different Bifidobacterium adolescentis sucrose phosphorylase variants. Variant L341I_Q345S was found to efficiently synthesize kojibiose while remaining fully active after 1 week of incubation at 55 °C. Process optimization allowed kojibiose production at the kilogram scale, and simple but efficient downstream processing, using a yeast treatment and crystallization, resulted in more than 3 kg of highly pure crystalline kojibiose (99.8%). These amounts allowed a deeper characterization of its potential in food applications. It was found to have possible beneficial health effects, including delayed glucose release and potential to trigger SCFA production. Finally, we compared the bulk functionality of highly pure kojibiose to that of sucrose, hereby mapping its potential as a new sweetener in confectionery products.
Performance study of a data flow architecture
NASA Technical Reports Server (NTRS)
Adams, George
1985-01-01
Teams of scientists studied data flow concepts, static data flow machine architecture, and the VAL language. Each team mapped its application onto the machine and coded it in VAL. The principal findings of the study were: (1) Five of the seven applications used the full power of the target machine. The galactic simulation and multigrid fluid flow teams found that a significantly smaller version of the machine (16 processing elements) would suffice. (2) A number of machine design parameters including processing element (PE) function unit numbers, array memory size and bandwidth, and routing network capability were found to be crucial for optimal machine performance. (3) The study participants readily acquired VAL programming skills. (4) Participants learned that application-based performance evaluation is a sound method of evaluating new computer architectures, even those that are not fully specified. During the course of the study, participants developed models for using computers to solve numerical problems and for evaluating new architectures. These models form the bases for future evaluation studies.
Malleable architecture generator for FPGA computing
NASA Astrophysics Data System (ADS)
Gokhale, Maya; Kaba, James; Marks, Aaron; Kim, Jang
1996-10-01
The malleable architecture generator (MARGE) is a tool set that translates high-level parallel C to configuration bit streams for field-programmable logic based computing systems. MARGE creates an application-specific instruction set and generates the custom hardware components required to perform exactly those computations specified by the C program. In contrast to traditional fixed-instruction processors, MARGE's dynamic instruction set creation provides for efficient use of hardware resources. MARGE processes intermediate code in which each operation is annotated by the bit lengths of the operands. Each basic block (sequence of straight line code) is mapped into a single custom instruction which contains all the operations and logic inherent in the block. A synthesis phase maps the operations comprising the instructions into register transfer level structural components and control logic which have been optimized to exploit functional parallelism and function unit reuse. As a final stage, commercial technology-specific tools are used to generate configuration bit streams for the desired target hardware. Technology- specific pre-placed, pre-routed macro blocks are utilized to implement as much of the hardware as possible. MARGE currently supports the Xilinx-based Splash-2 reconfigurable accelerator and National Semiconductor's CLAy-based parallel accelerator, MAPA. The MARGE approach has been demonstrated on systolic applications such as DNA sequence comparison.
Globally optimal superconducting magnets part II: symmetric MSE coil arrangement.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
A globally optimal superconducting magnet coil design procedure based on the Minimum Stored Energy (MSE) current density map is outlined. The method has the ability to arrange coils in a manner that generates a strong and homogeneous axial magnetic field over a predefined region, and ensures the stray field external to the assembly and peak magnetic field at the wires are in acceptable ranges. The outlined strategy of allocating coils within a given domain suggests that coils should be placed around the perimeter of the domain with adjacent coils possessing alternating winding directions for optimum performance. The underlying current density maps from which the coils themselves are derived are unique, and optimized to possess minimal stored energy. Therefore, the method produces magnet designs with the lowest possible overall stored energy. Optimal coil layouts are provided for unshielded and shielded short bore symmetric superconducting magnets.
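The MSE objective can be sketched in standard magnetostatic form (a hedged paraphrase; symbols are generic, not taken from the paper):

$$ \min_{\mathbf{J}}\; E[\mathbf{J}] = \tfrac{1}{2} \int_{\Omega} \mathbf{J}(\mathbf{r}) \cdot \mathbf{A}(\mathbf{r})\, \mathrm{d}v \quad \text{s.t.} \quad |B_z(\mathbf{r}_i) - B_0| \le \varepsilon \ \text{(homogeneity region)}, \quad |\mathbf{B}(\mathbf{r}_j)| \le B_{\mathrm{stray}} \ \text{(exterior)}, \quad |\mathbf{B}| \le B_{\mathrm{peak}} \ \text{(at the wires)}, $$

where A is the vector potential generated by the current density J; the discrete coils are then derived from the optimal current density map.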
Mapping PetaSHA Applications to TeraGrid Architectures
NASA Astrophysics Data System (ADS)
Cui, Y.; Moore, R.; Olsen, K.; Zhu, J.; Dalguer, L. A.; Day, S.; Cruz-Atienza, V.; Maechling, P.; Jordan, T.
2007-12-01
The Southern California Earthquake Center (SCEC) has a science program developing an integrated cyberfacility, PetaSHA, for executing physics-based seismic hazard analysis (SHA) computations. The NSF has awarded PetaSHA 15 million allocation service units this year on the fastest supercomputers available within the NSF TeraGrid. However, one size does not fit all: a range of systems is needed to support this effort at different stages of the simulations. Enabling PetaSHA simulations on these TeraGrid architectures, solving both dynamic rupture and seismic wave propagation problems, has been a challenge at both the hardware and software levels. Adapting to the specific requirements of each architecture requires determining how fundamental system attributes affect application performance. We present an adaptive approach in our PetaSHA application that enables the simultaneous optimization of both computation and communication at run time using flexible settings. These techniques optimize initialization, source/media partitioning, and MPI-IO output in different ways to achieve optimal performance on the target machines. The resulting code is a factor of four faster than the original version. New MPI-IO capabilities have been added for the accurate Staggered-Grid Split-Node (SGSN) method for dynamic rupture propagation in the velocity-stress staggered-grid finite difference scheme (Dalguer and Day, JGR, 2007). We use execution workflows across TeraGrid sites for managing the resulting data volumes. Our lessons learned indicate that minimizing time to solution is most critical, particularly when scheduling large-scale simulations across supercomputer sites. The TeraShake platform has been ported to multiple architectures, including the TACC Dell Lonestar and Abe, Cray XT3 BigBen, and Blue Gene/L systems. Parallel efficiency of 96% with the PetaSHA application Olsen-AWM has been demonstrated on 40,960 Blue Gene/L processors at the IBM TJ Watson Center. Notable accomplishments using the optimized code include the M7.8 ShakeOut rupture scenario, part of the Southern San Andreas Fault Evaluation (SoSAFE). The ShakeOut simulation domain is the same as that used for the SCEC TeraShake simulations (600 km by 300 km by 80 km); however, the higher resolution of 100 m with frequency content up to 1 Hz required 14.4 billion grid points, eight times more than the TeraShake scenarios. The simulation used 2000 TACC Dell Linux Lonestar processors and took 56 hours to compute 240 seconds of wave propagation. The pre-processing input partition, as well as post-processing analysis, was performed on the SDSC IBM DataStar p655 and p690. In addition, as part of the SCEC DynaShake computational platform, the SGSN capability was used to model dynamic rupture propagation for a ShakeOut scenario that matches the proposed surface slip and size of the event. Mapping applications to different architectures requires coordinating many areas of expertise at the hardware and application levels, an outstanding challenge in the current petascale computing effort. We believe our techniques, together with distributed data management through data grids, provide a practical example of how to use multiple compute resources effectively, and our results will benefit other geoscience disciplines as well.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step maximum a posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the fully Bayesian inversion (FBI) and mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion quickly brings the solution close to the 'true' one and jumps over local maxima in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo inversion (MCI) technique, taking all parameters obtained in step one as the initial solution. Slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters held fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, together with corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake, and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. Details will be reported at the meeting.
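A minimal sketch of the step-one idea (a global annealing search over the nonlinear, non-slip parameters with an inner linear least-squares solve for slip, followed by damping to the physical range). SciPy's dual_annealing stands in for ASA, and the toy forward model is invented:

    import numpy as np
    from scipy.optimize import dual_annealing

    rng = np.random.default_rng(1)
    n_obs, n_patches = 50, 8
    theta_true = 45.0                                # e.g., fault dip (degrees)

    def greens(theta):
        # Toy Green's function matrix, nonlinear in the geometry parameter.
        x = np.linspace(0, 1, n_obs)[:, None]
        c = np.linspace(0, 1, n_patches)[None, :]
        return np.exp(-((x - c) ** 2) * (1 + theta / 30.0)) * np.sin(np.radians(theta))

    s_true = np.abs(rng.normal(1.0, 0.3, n_patches))  # slip, positive
    d = greens(theta_true) @ s_true + 0.01 * rng.normal(size=n_obs)

    def misfit(params):
        G = greens(params[0])
        s, *_ = np.linalg.lstsq(G, d, rcond=None)     # inner linear LSQ, no positivity
        s = np.clip(s, 0, None)                       # damp to the physical range after
        return float(np.sum((G @ s - d) ** 2))

    res = dual_annealing(misfit, bounds=[(10.0, 80.0)], seed=2, maxiter=200)
    print(res.x)   # recovered geometry; slip follows from the inner solve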
NASA Astrophysics Data System (ADS)
Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.
2016-12-01
The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
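A minimal sketch of the sampling rule described above (proportional allocation with a per-class floor and ceiling). The function name and the rescaling detail are our own; only the 20,000 / 600 / 8000 figures come from the abstract:

    import numpy as np

    def allocate_training(class_counts, total=20000, lo=600, hi=8000):
        """Allocate training pixels proportionally to class occurrence,
        clamped to [lo, hi] pixels per class."""
        counts = np.asarray(class_counts, dtype=float)
        alloc = total * counts / counts.sum()
        alloc = np.clip(alloc, lo, hi)
        # Rescale the unclamped classes so the grand total stays near `total`.
        free = (alloc > lo) & (alloc < hi)
        if free.any():
            alloc[free] *= (total - alloc[~free].sum()) / alloc[free].sum()
            alloc = np.clip(alloc, lo, hi)
        return alloc.round().astype(int)

    # e.g., pixel counts per land-cover class in a Landsat-scene-sized area:
    print(allocate_training([4_000_000, 1_200_000, 300_000, 90_000, 15_000]))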
Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.
Aji, Ablimit; Wang, Fusheng; Saltz, Joel H
2012-11-06
Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amount of data with fast response, which is faced with two major challenges: the "big data" challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
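A toy, in-memory analogue of the partition-merge approach (grid partitioning in the map phase, a per-tile join in the reduce phase, with boundary replication so no near pair is missed). All names and the grid scheme are illustrative, not the paper's implementation:

    from collections import defaultdict
    from itertools import product

    def cell(pt, size=1.0):
        return (int(pt[0] // size), int(pt[1] // size))

    def spatial_join(A, B, radius=0.2, size=1.0):
        """Join all pairs (a, b) within `radius`. Correct as long as
        radius <= size; replicating every object to the 8 neighbouring
        tiles is wasteful but simple, and the set dedupes pairs found
        in more than one tile."""
        tiles = defaultdict(lambda: ([], []))
        for src, data in ((0, A), (1, B)):                 # "map" phase
            for pt in data:
                cx, cy = cell(pt, size)
                for dx, dy in product((-1, 0, 1), repeat=2):
                    tiles[(cx + dx, cy + dy)][src].append(pt)
        out = set()
        for a_pts, b_pts in tiles.values():                # "reduce" phase
            for a in a_pts:
                for b in b_pts:
                    if (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= radius ** 2:
                        out.add((a, b))
        return out

    nuclei  = [(0.95, 1.02), (2.5, 2.5)]
    vessels = [(1.05, 1.08), (4.0, 4.0)]
    print(spatial_join(nuclei, vessels))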
Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea
2016-01-01
Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.
The Greek National Observatory of Forest Fires (NOFFi)
NASA Astrophysics Data System (ADS)
Tompoulidou, Maria; Stefanidou, Alexandra; Grigoriadis, Dionysios; Dragozi, Eleni; Stavrakoudis, Dimitris; Gitas, Ioannis Z.
2016-08-01
Efficient forest fire management is a key element for alleviating the catastrophic impacts of wildfires. Overall, the effective response to fire events necessitates adequate planning and preparedness before the start of the fire season, as well as quantifying the environmental impacts in case of wildfires. Moreover, the estimation of fire danger provides crucial information required for the optimal allocation and distribution of the available resources. The Greek National Observatory of Forest Fires (NOFFi)—established by the Greek Forestry Service in collaboration with the Laboratory of Forest Management and Remote Sensing of the Aristotle University of Thessaloniki and the International Balkan Center—aims to develop a series of modern products and services for supporting the efficient forest fire prevention management in Greece and the Balkan region, as well as to stimulate the development of transnational fire prevention and impacts mitigation policies. More specifically, NOFFi provides three main fire-related products and services: a) a remote sensing-based fuel type mapping methodology, b) a semi-automatic burned area mapping service, and c) a dynamically updatable fire danger index providing mid- to long-term predictions. The fuel type mapping methodology was developed and applied across the country, following an object-oriented approach and using Landsat 8 OLI satellite imagery. The results showcase the effectiveness of the generated methodology in obtaining highly accurate fuel type maps on a national level. The burned area mapping methodology was developed as a semi-automatic object-based classification process, carefully crafted to minimize user interaction and, hence, be easily applicable on a near real-time operational level as well as for mapping historical events. NOFFi's products can be visualized through the interactive Fire Forest portal, which allows the involvement and awareness of the relevant stakeholders via the Public Participation GIS (PPGIS) tool.
Pengra, Bruce; Johnston, C.A.; Loveland, Thomas R.
2007-01-01
Mapping tools are needed to document the location and extent of Phragmites australis, a tall grass that invades coastal marshes throughout North America, displacing native plant species and degrading wetland habitat. Mapping Phragmites is particularly challenging in the freshwater Great Lakes coastal wetlands due to dynamic lake levels and vegetation diversity. We tested the applicability of Hyperion hyperspectral satellite imagery for mapping Phragmites in wetlands of the west coast of Green Bay in Wisconsin, U.S.A. A reference spectrum created using Hyperion data from several pure Phragmites stands within the image was used with a Spectral Correlation Mapper (SCM) algorithm to create a raster map with values ranging from 0 to 1, where 0 represented the greatest similarity between the reference spectrum and the image spectrum and 1 the least similarity. The final two-class thematic classification predicted monodominant Phragmites covering 3.4% of the study area. Most of this was concentrated in long linear features parallel to the Green Bay shoreline, particularly in areas that had been under water only six years earlier when lake levels were 66 cm higher. An error matrix using spring 2005 field validation points (n = 129) showed good overall accuracy (81.4%). The small size and linear arrangement of Phragmites stands was less than optimal relative to the sensor resolution, and Hyperion's 30 m resolution captured few if any pure pixels. Contemporary Phragmites maps prepared with Hyperion imagery would provide wetland managers with a tool that they currently lack, which could aid attempts to stem the spread of this invasive species. © 2006 Elsevier Inc. All rights reserved.
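A minimal numpy sketch of a correlation-based similarity score rescaled to [0, 1] with 0 = most similar, consistent with the SCM usage described above (the exact rescaling the authors used is an assumption):

    import numpy as np

    def scm_distance(spectrum, reference):
        """Spectral Correlation Mapper score: Pearson correlation between
        a pixel spectrum and a reference, mapped so 0 = identical, 1 = opposite."""
        s = spectrum - spectrum.mean()
        r = reference - reference.mean()
        corr = (s @ r) / (np.linalg.norm(s) * np.linalg.norm(r))
        return (1.0 - corr) / 2.0

    reference = np.array([0.05, 0.08, 0.30, 0.45, 0.40, 0.20])  # pure Phragmites (toy)
    pixel     = np.array([0.06, 0.09, 0.28, 0.43, 0.38, 0.22])
    print(scm_distance(pixel, reference))   # near 0: classified as Phragmites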
Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig
2014-01-01
Objective To derive and validate a model that accurately predicts ambulance arrival time that could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Significance of the covariates was determined by p < 0.05 for a t-test of the model coefficients. Accuracy was quantified by the proportion of estimates that were within 5 minutes of the actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application to optimize emergency department resource use. Use of lights and sirens had a significant effect on transport times. PMID:23865736
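A hedged sketch of the modeling step (ordinary least squares correcting a street-network estimate with the covariates named above). The data are synthetic and the coefficient values invented; the real model was fit to dispatch records:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 5000
    network_eta = rng.uniform(3, 30, n)        # street-network estimate (min)
    lights      = rng.integers(0, 2, n)        # lights-and-sirens indicator
    rush        = rng.integers(0, 2, n)        # rush-hour indicator
    daylight    = rng.integers(0, 2, n)
    # Synthetic "actual" times: the raw network estimate is biased low,
    # lights/sirens shorten transports, rush hour and daylight lengthen them.
    actual = (1.25 * network_eta - 4.0 * lights + 2.0 * rush + 1.0 * daylight
              + rng.normal(0, 2, n))

    X = np.column_stack([network_eta, lights, rush, daylight])
    model = LinearRegression().fit(X, actual)
    pred = model.predict(X)
    print("within 5 min:", np.mean(np.abs(pred - actual) <= 5.0))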
Artificial neural systems for interpretation and inversion of seismic data
NASA Astrophysics Data System (ADS)
Calderon-Macias, Carlos
The goal of this work is to investigate the feasibility of using neural network (NN) models for solving geophysical exploration problems. First, a feedforward neural network (FNN) is used to solve inverse problems. The operational characteristics of a FNN are primarily controlled by a set of weights and a nonlinear function that performs a mapping between two sets of data. In a process known as training, the FNN weights are iteratively adjusted to perform the mapping. After training, the computed weights encode important features of the data that enable one pattern to be distinguished from another. Synthetic data computed from an ensemble of earth models and the corresponding models provide the training data. Two training methods are studied: the backpropagation method which is a gradient scheme, and a global optimization method called very fast simulated annealing (VFSA). A trained network is then used to predict models from new data (e.g., data from a new location) in a one-step procedure. The application of this method to the problems of obtaining formation resistivities and layer thicknesses from resistivity sounding data and 1D velocity models from seismic data shows that trained FNNs produce reasonably accurate earth models when observed data are input to the FNNs. In a second application, a FNN is used for automating the NMO correction process of seismic reflection data. The task of the FNN is to map CMP data at control locations along a seismic line into subsurface velocities. The network is trained while the velocity analyses are performed at the control locations. Once trained, the computed weights are used as an operator that acts on the remaining CMP data as a velocity interpolator, resulting in a fast method for NMO correction. The second part of this dissertation describes the application of a Hopfield neural network (HNN) to the problems of deconvolution and multiple attenuation. In these applications, the unknown parameters (reflection coefficients and source wavelet in the first problem and an operator in the second) are mapped as neurons of the HNN. The proposed deconvolution method attempts to reproduce the data with a limited number of events. The multiple attenuation method resembles the predictive deconvolution method. Results of this method are compared with a multiple elimination method based on estimating the source wavelet from the seismic data.
Hartmann-Hahn 2D-map to optimize the RAMP-CPMAS NMR experiment for pharmaceutical materials.
Suzuki, Kazuko; Martineau, Charlotte; Fink, Gerhard; Steuernagel, Stefan; Taulelle, Francis
2012-02-01
Cross polarization-magic angle spinning (CPMAS) is the most used experiment for solid-state NMR measurements in the pharmaceutical industry, with the well-known RAMP-CPMAS variant its dominant implementation. The experimental work presented in this contribution focuses on the entangled effects of the main parameters of such an experiment. The shape of the RAMP-CP pulse has been considered, as well as the contact time duration, and particular attention has also been devoted to the radio-frequency (RF) field inhomogeneity. (13)C CPMAS NMR spectra have been recorded with a systematic variation of (13)C and (1)H constant radio-frequency field pair values and represented as a two-dimensional Hartmann-Hahn matching map. Such a map yields a rational overview of the intricate optimal conditions necessary to achieve efficient CP magnetization transfer. The map also highlights the effect of sweeping the RF with the RAMP-CP pulse on the number of Hartmann-Hahn matches crossed, and shows how RF field inhomogeneity helps to increase the CP efficiency by using a larger fraction of the sample. In the light of these results, strategies for optimal RAMP-CPMAS measurements are suggested, which lead to a much higher efficiency than the constant-amplitude CP experiment. Copyright © 2012 John Wiley & Sons, Ltd.
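For reference, the matching conditions that such a 2D map traverses are the textbook Hartmann-Hahn sideband conditions under MAS (standard theory, not a result of the paper):

$$ \omega_{1\mathrm{H}} - \omega_{1\mathrm{C}} = n\,\omega_{r}, \qquad n = 0, \pm 1, \pm 2, $$

where ω₁X = γX B₁X are the RF nutation frequencies of the two channels and ωr is the spinning rate; n = 0 is the static match, which is suppressed under fast MAS, while the n = ±1, ±2 sideband matches remain efficient. A RAMP-CP pulse sweeps one nutation frequency through one or more of these matches, which is why the swept experiment is more robust to RF field inhomogeneity than constant-amplitude CP.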
Efficient hash tables for network applications.
Zink, Thomas; Waldvogel, Marcel
2015-01-01
Hashing has yet to be widely accepted as a component of hard real-time systems and hardware implementations, due to still existing prejudices concerning the unpredictability of space and time requirements resulting from collisions. While in theory perfect hashing can provide optimal mapping, in practice, finding a perfect hash function is too expensive, especially in the context of high-speed applications. The introduction of hashing with multiple choices, d-left hashing and probabilistic table summaries, has caused a shift towards deterministic DRAM access. However, high amounts of rare and expensive high-speed SRAM need to be traded off for predictability, which is infeasible for many applications. In this paper we show that previous suggestions suffer from the false precondition of full generality. Our approach exploits four individual degrees of freedom available in many practical applications, especially hardware and high-speed lookups. This reduces the requirement of on-chip memory up to an order of magnitude and guarantees constant lookup and update time at the cost of only minute amounts of additional hardware. Our design makes efficient hash table implementations cheaper, more predictable, and more practical.
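A minimal sketch of hashing with multiple choices (d-left): insert into the least-loaded of d candidate buckets, ties broken toward the leftmost subtable, giving a constant worst-case number of probes per lookup. Class and parameter names are illustrative, not from the paper:

    class DLeftHash:
        """Minimal d-left hash table: d subtables, insert into the
        least-loaded candidate bucket, ties broken left."""
        def __init__(self, d=2, buckets=64, slots=4):
            self.d, self.buckets, self.slots = d, buckets, slots
            self.tables = [[[] for _ in range(buckets)] for _ in range(d)]

        def _bucket(self, key, i):
            return hash((i, key)) % self.buckets   # independent hash per subtable

        def insert(self, key, value):
            loads = [(len(self.tables[i][self._bucket(key, i)]), i)
                     for i in range(self.d)]
            load, i = min(loads)                   # min of (load, i): ties go left
            if load >= self.slots:
                raise OverflowError("bucket full; resize or rehash")
            self.tables[i][self._bucket(key, i)].append((key, value))

        def lookup(self, key):
            for i in range(self.d):                # exactly d buckets probed: O(1)
                for k, v in self.tables[i][self._bucket(key, i)]:
                    if k == key:
                        return v
            return None

    t = DLeftHash()
    t.insert("10.0.0.1", "port 3")
    print(t.lookup("10.0.0.1"))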
Kim, Jungsuk; Maitra, Raj D; Pedrotti, Ken; Dunbar, William B
2013-02-01
In this paper, we demonstrate the application of a novel current-measuring sensor (CMS) customized for nanopore applications. The low-noise CMS is fabricated in a 0.35 μm CMOS process and is implemented in experiments involving DNA captured in an α-hemolysin (α-HL) nanopore. Specifically, the CMS is used to build a current amplitude map as a function of varying positions of a single abasic residue within a homopolymer cytosine single-stranded DNA (ssDNA) that is captured and held in the pore. Each ssDNA is immobilized using a biotin-streptavidin linkage. Five different DNA templates are measured and compared: one all-cytosine ssDNA, and four with a single abasic residue substitution that resides in or near the ~1.5 nm aperture of the α-HL channel when the strand is immobilized. The CMOS CMS is shown to resolve the ~5 Å displacements of the abasic residue among the varying templates. The demonstration represents an advance in application-specific circuitry that is optimized for small-footprint nanopore applications, including genomic sequencing.
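The "current amplitude map" is essentially an aggregation of blockade-current measurements keyed by the abasic residue's position. The sketch below illustrates that bookkeeping with entirely fabricated placeholder values; the positions and currents are hypothetical and are not taken from the paper.

```python
import statistics
from collections import defaultdict

# Hypothetical event records: (abasic_position, blockade_current_pA) per
# captured strand; position -1 marks the all-cytosine control template.
events = [
    (-1, 18.2), (-1, 18.5),
    (9, 21.1), (9, 20.8),
    (14, 23.4), (14, 23.0),
    (20, 19.7), (25, 18.9),
]

by_position = defaultdict(list)
for pos, current in events:
    by_position[pos].append(current)

# The current amplitude map: mean blockade current versus abasic position.
amplitude_map = {pos: statistics.mean(vals) for pos, vals in by_position.items()}
for pos in sorted(amplitude_map):
    label = "all-C control" if pos == -1 else f"abasic at position {pos}"
    print(f"{label:>22}: {amplitude_map[pos]:.1f} pA")
```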
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
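A minimal sketch of the constraint-elimination idea: when the continuous variables x enter only through a linear constraint A x + C d = b and a quadratic objective, solving for x = A^{-1}(b - C d) and substituting yields a quadratic function of the binary controls d alone, whose linear part folds into the QUBO diagonal because d_i^2 = d_i for binary d. The matrices, sizes and objective below are arbitrary placeholders, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: n continuous state variables (e.g. a discretized PDE
# solution) and m binary control variables.
n, m = 50, 6

Q = np.eye(n)                                   # quadratic cost on the state
c = rng.normal(size=m)                          # linear cost on the controls
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # invertible constraint matrix
C = rng.normal(size=(n, m))
b = rng.normal(size=n)

# Eliminate x via A x + C d = b  =>  x = A^{-1}(b - C d), then substitute into
# f(x, d) = x^T Q x + c^T d to obtain a function of d alone.
Ainv_b = np.linalg.solve(A, b)      # A^{-1} b
Ainv_C = np.linalg.solve(A, C)      # A^{-1} C

lin = -2.0 * Ainv_b @ Q @ Ainv_C + c        # linear coefficients in d
quad = Ainv_C.T @ Q @ Ainv_C                # quadratic coefficients in d

# For binary d, d_i^2 = d_i, so linear terms move onto the diagonal: the QUBO
# matrix depends only on the m discrete controls, not on the n continuous ones.
QUBO = quad + np.diag(lin)

def energy(d):
    return d @ QUBO @ d

# Brute-force check on this toy instance (2**m = 64 assignments).
best = min((np.array([(k >> i) & 1 for i in range(m)], float) for k in range(2**m)),
           key=energy)
print("optimal control bits:", best.astype(int))
```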
Smart "geomorphological" map browsing - a tale about geomorphological maps and the internet
NASA Astrophysics Data System (ADS)
Geilhausen, M.; Otto, J.-C.
2012-04-01
With the digital production of geomorphological maps, the dissemination of research outputs now extends beyond simple paper products. Internet technologies can contribute both to the dissemination of geomorphological maps and to access to geomorphological data, and they help make geomorphological knowledge available to a wider public. Indeed, many national geological surveys employ end-to-end digital workflows from data capture in the field to final map production and dissemination. This paper deals with the potential of web mapping applications and interactive, portable georeferenced PDF maps for the distribution of geomorphological information. Web mapping applications such as Google Maps have become very popular and widespread and have increased interest in, and access to, mapping. They link the Internet with GIS technology and are a common way of presenting dynamic maps online. The GIS processing is performed online and the maps are visualised in interactive web viewers with capabilities such as zooming, panning or adding further thematic layers, with the map refreshed after each task. Depending on the system architecture and the components used, advanced symbology, map overlays from different applications and sources, and their integration into a desktop GIS are possible. This interoperability is achieved through the use of international open standards that include mechanisms for the integration and visualisation of information from multiple sources. The portable document format (PDF) is commonly used for printing and is a standard format that can be processed by many graphics programs and printers without loss of information. A GeoPDF enables the sharing of geospatial maps and data in PDF documents. Multiple, independent map frames with individual spatial reference systems are possible within a GeoPDF, for example for map overlays or insets. The geospatial functionality of a GeoPDF includes scalable map display, layer visibility control, access to attribute data, coordinate queries and spatial measurements. The full functionality of GeoPDFs requires free and user-friendly plug-ins for PDF readers and GIS software. A GeoPDF thus provides fundamental GIS functionality, turning the formerly static PDF map into an interactive, portable georeferenced map. GeoPDFs are easy to create and provide an interesting and valuable way to disseminate geomorphological maps. Our motivation to engage with the online distribution of geomorphological maps originates in the increasing number of web mapping applications available today, indicating that the Internet has become a medium for displaying geographical information in rich forms and user-friendly interfaces. So, why not use the Internet to distribute geomorphological maps and enhance their practical application? Web mapping and dynamic PDF maps can play a key role in the movement towards a global dissemination of geomorphological information. This will be exemplified by live demonstrations of (i) existing geomorphological WebGIS applications, (ii) data merging from various sources using web map services, and (iii) freely downloadable GeoPDF maps during the presentations.
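As a small illustration of how little code a basic web map needs, the sketch below builds an interactive HTML map from a hypothetical landforms.geojson layer, assuming the open-source folium Python library; it is not the tooling used by the authors.

```python
import folium

# Hypothetical inputs: a GeoJSON file of mapped landforms and a map centre.
m = folium.Map(location=[47.0, 13.0], zoom_start=12, tiles="OpenStreetMap")

folium.GeoJson("landforms.geojson", name="Geomorphological units").add_to(m)
folium.LayerControl().add_to(m)      # lets users toggle thematic layers

m.save("geomorphological_map.html")  # a self-contained, pan-and-zoom web map
```

Opening the generated HTML file in a browser provides panning, zooming and layer toggling without any server-side GIS.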
DiMaio, F; Chiu, W
2016-01-01
Electron cryo-microscopy (cryoEM) has advanced dramatically to become a viable tool for high-resolution structural biology research. The ultimate outcome of a cryoEM study is an atomic model of a macromolecule or its complex with interacting partners. This chapter describes a variety of algorithms and software to build a de novo model based on the cryoEM 3D density map, to optimize the model with the best stereochemistry restraints and finally to validate the model with proper protocols. The full process of atomic structure determination from a cryoEM map is described. The tools outlined in this chapter should prove extremely valuable in revealing atomic interactions guided by cryoEM data. © 2016 Elsevier Inc. All rights reserved.
Erasing the Milky Way: New Cleaning Technique Applied to GBT Intensity Mapping Data
NASA Technical Reports Server (NTRS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masi, K.W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.;
2016-01-01
We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 square degrees at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (fastica), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and their cross-correlation with the galaxy survey data. We show that fastica is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectrum of the intensity maps is dominated by instrumental noise on small scales, which fastica, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we obtain GBT-WiggleZ cross-correlation measurements similar to those obtained by the Singular Value Decomposition (SVD) method, and confirm that foreground subtraction with fastica is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and fastica are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping datasets.
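A schematic version of the pipeline, in which a bright smooth foreground is mixed with a faint signal, the dominant non-Gaussian modes are removed with FastICA, and a Fourier cross-power is formed with a tracer map, might look as follows. All data here are random placeholders, the scikit-learn FastICA is a stand-in for the authors' fastica implementation, and the one-dimensional cross-power is only a cartoon of the optimal quadratic estimator.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)

# Hypothetical data cube flattened to (n_freq, n_pix): a bright, spectrally
# smooth foreground, a faint signal and instrumental noise. All placeholders.
n_freq, n_pix = 64, 4096
freqs = np.linspace(0.0, 1.0, n_freq)[:, None]
foreground = 100.0 * (1.0 + 2.0 * freqs) * (1.0 + 0.1 * rng.normal(size=(1, n_pix)))
signal = 0.01 * rng.normal(size=(n_freq, n_pix))
maps = foreground + signal + 0.005 * rng.normal(size=(n_freq, n_pix))

# FastICA separates the dominant non-Gaussian foreground modes; reconstructing
# and subtracting them leaves the residual maps used for cross-correlation.
ica = FastICA(n_components=4, random_state=0, max_iter=1000)
sources = ica.fit_transform(maps.T)            # pixels as samples, freqs as features
foreground_model = ica.inverse_transform(sources).T
cleaned = maps - foreground_model

# Cartoon 1D cross-power between one cleaned frequency slice and a
# (hypothetical) galaxy overdensity map on the same pixel grid.
galaxy = rng.normal(size=n_pix)
ft_t = np.fft.rfft(cleaned[n_freq // 2])
ft_g = np.fft.rfft(galaxy)
cross_power = (ft_t * np.conj(ft_g)).real / n_pix
```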
Mapping urban green open space in Bontang city using QGIS and cloud computing
NASA Astrophysics Data System (ADS)
Agus, F.; Ramadiani; Silalahi, W.; Armanda, A.; Kusnandar
2018-04-01
Digital mapping techniques are freely and openly available, making map-based application development easier, faster and cheaper. The rapid development of cloud-computing geographic information systems allows such systems to meet the community's need for providing geospatial information online. Urban Green Open Space (GOS) provides great benefits as an oxygen supplier and carbon sink, and contributes to the comfort and beauty of city life. This study proposes a GIS Cloud Computing (CC) application platform for mapping the GOS of Bontang City. The GIS-CC platform uses base maps that are freely available and open source. The research used a survey method to collect GOS data obtained from the Bontang City Government, while application development was carried out with Quantum GIS and cloud computing. The results describe the existing GOS of Bontang City and the design of the GOS mapping application.
Rosas, Scott R; Ridings, John W
2017-02-01
The past decade has seen an increase in measurement development research in the social and health sciences that featured the use of concept mapping as a core technique. The purpose, application, and utility of concept mapping have varied across this emerging literature. Despite the variety of uses and range of outputs, little has been done to critically review how researchers have approached the application of concept mapping in the measurement development and evaluation process. This article focuses on a review of the current state of practice regarding the use of concept mapping as a methodological tool in this process. We systematically reviewed 23 scale or measure development and evaluation studies, and detail the application of concept mapping in the context of traditional measurement development and psychometric testing processes. Although several limitations surfaced, we found several strengths in the contemporary application of the method. We determined that concept mapping (a) provides a solid method for establishing content validity, (b) facilitates researcher decision-making, (c) offers insight into target population perspectives that are integrated a priori, and (d) establishes a foundation for analytical and interpretative choices. Based on these results, we outline how concept mapping can be situated in the measurement development and evaluation processes for new instrumentation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Maroney, Susan A; McCool, Mary Jane; Geter, Kenneth D; James, Angela M
2007-01-01
The internet is used increasingly as an effective means of disseminating information. For the past five years, the United States Department of Agriculture (USDA) Veterinary Services (VS) has published animal health information in internet-based map server applications, each oriented to a specific surveillance or outbreak response need. Using internet-based technology allows users to create dynamic, customised maps and perform basic spatial analysis without the need to buy or learn desktop geographic information systems (GIS) software. At the same time, access can be restricted to authorised users. The VS internet mapping applications to date are as follows: Equine Infectious Anemia Testing 1972-2005, National Tick Survey tick distribution maps, the Emergency Management Response System-Mapping Module for disease investigations and emergency outbreaks, and the Scrapie mapping module to assist with the control and eradication of this disease. These services were created using Environmental Systems Research Institute (ESRI)'s internet map server technology (ArcIMS). Other leading technologies for spatial data dissemination are ArcGIS Server, ArcEngine, and ArcWeb Services. VS is prototyping applications using these technologies, including the VS Atlas of Animal Health Information using ArcGIS Server technology and the Map Kiosk using ArcEngine for automating standard map production in the case of an emergency.
Dönitz, Jürgen; Wingender, Edgar
2012-01-01
The semantic web depends on the use of ontologies to let electronic systems interpret contextual information. Optimally, the handling and access of ontologies should be completely transparent to the user. As a means to this end, we have developed a service that attempts to bridge the gap between experts in a certain knowledge domain, ontologists, and application developers. The ontology-based answers (OBA) service introduced here can be embedded into custom applications to grant access to the classes of ontologies and their relations as most important structural features as well as to information encoded in the relations between ontology classes. Thus computational biologists can benefit from ontologies without detailed knowledge about the respective ontology. The content of ontologies is mapped to a graph of connected objects which is compatible to the object-oriented programming style in Java. Semantic functions implement knowledge about the complex semantics of an ontology beyond the class hierarchy and "partOf" relations. By using these OBA functions an application can, for example, provide a semantic search function, or (in the examples outlined) map an anatomical structure to the organs it belongs to. The semantic functions relieve the application developer from the necessity of acquiring in-depth knowledge about the semantics and curation guidelines of the used ontologies by implementing the required knowledge. The architecture of the OBA service encapsulates the logic to process ontologies in order to achieve a separation from the application logic. A public server with the current plugins is available and can be used with the provided connector in a custom application in scenarios analogous to the presented use cases. The server and the client are freely available if a project requires the use of custom plugins or non-public ontologies. The OBA service and further documentation is available at http://www.bioinf.med.uni-goettingen.de/projects/oba.
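The idea of exposing ontology classes as connected objects and wrapping curation knowledge in semantic functions can be mocked up in a few lines. The sketch below is purely illustrative: the class names, the partOf/is_a attributes and the organs_of function are hypothetical stand-ins and do not reflect the actual OBA Java API.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyClass:
    """An ontology class mapped to a plain object; relations become references."""
    name: str
    parents: list = field(default_factory=list)   # is_a (class hierarchy)
    part_of: list = field(default_factory=list)   # partOf relations

# Tiny hypothetical anatomy ontology.
organ = OntologyClass("organ")
heart = OntologyClass("heart", parents=[organ])
ventricle = OntologyClass("left ventricle", part_of=[heart])
myocardium = OntologyClass("myocardium of left ventricle", part_of=[ventricle])

def organs_of(structure):
    """Semantic function: map an anatomical structure to the organ(s) it belongs
    to by walking partOf edges and consulting the is_a hierarchy."""
    def is_organ(cls):
        return cls is organ or any(is_organ(p) for p in cls.parents)
    found, stack, seen = [], [structure], set()
    while stack:
        cls = stack.pop()
        if id(cls) in seen:
            continue
        seen.add(id(cls))
        if is_organ(cls) and cls is not structure:
            found.append(cls.name)
        stack.extend(cls.part_of)
    return found

print(organs_of(myocardium))   # ['heart']
```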
Development and application of CATIA-GDML geometry builder
NASA Astrophysics Data System (ADS)
Belogurov, S.; Berchun, Yu; Chernogorov, A.; Malzacher, P.; Ovcharenko, E.; Schetinin, V.
2014-06-01
Due to the conceptual difference between geometry descriptions in Computer-Aided Design (CAD) systems and particle transport Monte Carlo (MC) codes, direct conversion of detector geometry in either direction is not feasible. The paper presents an update on the functionality and application practice of the CATIA-GDML geometry builder first introduced at CHEP2010. This set of CATIAv5 tools has been developed for building an MC-optimized GEANT4/ROOT compatible geometry based on the existing CAD model. The model can be exported via the Geometry Description Markup Language (GDML). The builder also allows the import and visualization of GEANT4/ROOT geometries in CATIA. The structure of a GDML file, including replicated volumes, volume assemblies and variables, is mapped into a part specification tree. A dedicated file template, a wide range of primitives, tools for measurement and implicit calculation of parameters, different types of multiple volume instantiation, mirroring, positioning and quality checks have been implemented. Several use cases are discussed.
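Since GDML is plain XML, the mapping of its structure into a tree of volumes can be illustrated with a short parser sketch. The file name is hypothetical, the code assumes a simple GDML file (no assemblies, loops or XML namespaces), and it is not part of the CATIA-GDML builder itself.

```python
import xml.etree.ElementTree as ET

# Minimal sketch: read a GDML file and print its volume hierarchy, mirroring
# the way a GDML structure maps onto a part specification tree.
tree = ET.parse("detector.gdml")      # hypothetical file name
root = tree.getroot()

structure = root.find("structure")
volumes = {v.get("name"): v for v in structure.findall("volume")}

def print_tree(volume_name, indent=0):
    print("  " * indent + volume_name)
    vol = volumes.get(volume_name)
    if vol is None:
        return
    # Each <physvol> places a daughter logical volume inside its mother volume.
    for physvol in vol.findall("physvol"):
        ref = physvol.find("volumeref")
        if ref is not None:
            print_tree(ref.get("ref"), indent + 1)

world_ref = root.find("setup/world")
print_tree(world_ref.get("ref"))
```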
ASSET Queries: A Set-Oriented and Column-Wise Approach to Modern OLAP
NASA Astrophysics Data System (ADS)
Chatziantoniou, Damianos; Sotiropoulos, Yannis
Modern data analysis has given birth to numerous grouping constructs and programming paradigms, well beyond the traditional group by. Applications such as data warehousing, web log analysis, stream monitoring and social network understanding have necessitated the use of data cubes, grouping variables, windows and MapReduce. In this paper we review the associated set (ASSET) concept and discuss its applicability in both continuous and traditional data settings. Given a set of values B, an associated set over B is simply a collection of annotated data multisets, one for each b ∈ B. The goal is to efficiently compute aggregates over these data sets. An ASSET query consists of repeated definitions of associated sets and aggregates of these, possibly correlated, resembling a spreadsheet document. We review systems implementing ASSET queries in both continuous and persistent contexts and argue for associated sets' analytical abilities and optimization opportunities.
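A toy rendition of the associated-set idea in plain Python: for each value b in B, build one or more (possibly correlated) associated multisets and attach their aggregates as extra columns, much like filling successive columns of a spreadsheet. The fact table, the customer grouping and the above-average condition are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical fact table: (customer, amount) rows.
rows = [("a", 10.0), ("a", 30.0), ("a", 50.0), ("b", 5.0), ("b", 25.0)]

# B: the set of grouping values (customers).
B = sorted({cust for cust, _ in rows})

# First associated set X1(b): all amounts of customer b; aggregate avg(X1).
x1 = defaultdict(list)
for cust, amount in rows:
    x1[cust].append(amount)
avg1 = {b: mean(x1[b]) for b in B}

# Second, correlated associated set X2(b): amounts of b above avg(X1(b));
# aggregate count(X2) becomes one more column of the same result row.
x2 = {b: [a for a in x1[b] if a > avg1[b]] for b in B}
result = {b: (avg1[b], len(x2[b])) for b in B}
print(result)   # {'a': (30.0, 1), 'b': (15.0, 1)}
```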