Scherzinger, William M.
2016-05-01
The numerical integration of constitutive models in computational solid mechanics codes allows for the solution of boundary value problems involving complex material behavior. Metal plasticity models, in particular, have been instrumental in the development of these codes. Most plasticity models implemented in computational codes use an isotropic von Mises yield surface. The von Mises, or J2, yield surface has a simple predictor-corrector algorithm - the radial return algorithm - to integrate the model.
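The radial return predictor-corrector can be sketched in a few lines. This is a generic textbook version for perfect (non-hardening) J2 plasticity acting on the deviatoric trial stress, not the code described in the abstract, and the function and argument names are illustrative:

```python
import numpy as np

def radial_return(sigma_trial_dev, sigma_y, mu):
    """Radial return for perfect J2 (von Mises) plasticity.

    sigma_trial_dev : deviatoric part of the trial stress (3x3 array)
    sigma_y         : yield stress
    mu              : shear modulus
    Returns the corrected deviatoric stress and the plastic multiplier.
    """
    s_norm = np.sqrt(np.sum(sigma_trial_dev ** 2))
    # Yield function evaluated at the elastic predictor (trial) state
    f_trial = s_norm - np.sqrt(2.0 / 3.0) * sigma_y
    if f_trial <= 0.0:
        return sigma_trial_dev, 0.0        # elastic step: predictor accepted
    dgamma = f_trial / (2.0 * mu)          # plastic multiplier (no hardening)
    n = sigma_trial_dev / s_norm           # radial (unit normal) direction
    # Corrector: pull the trial stress radially back onto the yield surface
    return sigma_trial_dev - 2.0 * mu * dgamma * n, dgamma
```

For perfect plasticity the corrected stress lands exactly on the yield surface, which is easy to verify numerically.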
Semi-automated surface mapping via unsupervised classification
NASA Astrophysics Data System (ADS)
D'Amore, M.; Le Scaon, R.; Helbert, J.; Maturilli, A.
2017-09-01
Due to the increasing volume of data returned from space missions, the human search for correlations and identification of interesting features becomes more and more unfeasible. Statistical extraction of features via machine learning methods will increase the scientific output of remote sensing missions and aid the discovery of yet unknown features hidden in datasets. These methods exploit algorithms trained on features from multiple instruments, returning classification maps that explore intra-dataset correlations and allow for the discovery of unknown features. We present two applications, one for Mercury and one for Vesta.
NASA Astrophysics Data System (ADS)
Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.
2017-03-01
The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
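Matlab's Corr2 computes the 2-D correlation coefficient of two equally sized images. An equivalent can be sketched in a few lines of NumPy (a generic re-implementation for illustration, not the DTS code):

```python
import numpy as np

def corr2(a, b):
    """2-D correlation coefficient, analogous to MATLAB's corr2.

    Returns a value in [-1, 1]: 1 for identical images (up to an
    affine intensity shift), -1 for perfectly anti-correlated ones.
    """
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
```

In the workflow above, this scalar would be compared against the 50% threshold to decide whether the patient-model alignment is acceptable.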
Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill
2018-01-01
Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so called, Binomial Design algorithm that alters the transmission order of Golay complementary waveforms and weights the returns is proposed in an attempt to achieve an enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexity of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets are also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has a better detection performance in terms of lower sidelobes and higher Doppler resolution in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
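The defining property of a Golay complementary pair, that the aperiodic autocorrelations of the two waveforms sum to a delta (hence the essentially zero sidelobes for static targets), is easy to check numerically. The length-doubling recursion below is standard textbook material, not the Binomial Design or ordering algorithms of the paper:

```python
import numpy as np

def golay_pair(m):
    """Golay complementary pair of length 2**m via the standard recursion:
    (a, b) -> (a|b, a|-b), starting from a = b = [1]."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def autocorr(x):
    """Aperiodic autocorrelation (full lag range)."""
    return np.correlate(x, x, mode="full")
```

Summing the two autocorrelations yields 2N at zero lag and exactly zero at every other lag, the property that nonzero-Doppler returns destroy in practice.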
2011-01-01
0.25 s−1 to 0.75 s−1. The return mapping algorithm consists of an initial elastic predictor step, where the elastic response is assumed and the stresses... 18 different loadings are used. The parameters F, G, H are solved by an iterative algorithm with C = 3. The step is repeated for different values of...
Assistant for Analyzing Tropical-Rain-Mapping Radar Data
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A document is defined that describes an approach for a Tropical Rain Mapping Radar Data System (TDS). TDS is composed of software and hardware elements incorporating a two-frequency spaceborne radar system for measuring tropical precipitation. The TDS would be used primarily in generating data products for scientific investigations. The most novel part of the TDS would be expert-system software to aid in the selection of algorithms for converting raw radar-return data into such primary observables as rain rate, path-integrated rain rate, and surface backscatter. The expert-system approach would address the issue that selection of algorithms for processing the data requires a significant amount of preprocessing, non-intuitive reasoning, and heuristic application, making it infeasible, in many cases, to select the proper algorithm in real time. In the TDS, tentative selections would be made to enable conversions in real time. The expert system would remove straightforwardly convertible data from further consideration, and would examine ambiguous data, performing analysis in depth to determine which algorithms to select. Conversions performed by these algorithms, presumed to be correct, would be compared with the corresponding real-time conversions. Incorrect real-time conversions would be updated using the correct conversions.
Efficient and Scalable Graph Similarity Joins in MapReduce
Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. Given the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135
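A spectral Bloom filter generalizes the ordinary Bloom filter by replacing bits with counters, so item multiplicities can be estimated rather than mere membership. A minimal single-machine counting sketch (illustrative only; MGSJoin's distributed variant is not reproduced here, and all names are hypothetical):

```python
import hashlib

class CountingBloomFilter:
    """Minimal counting (spectral) Bloom filter sketch."""

    def __init__(self, size=4096, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.counts = [0] * size

    def _indices(self, item):
        # Derive k hash positions from salted SHA-256 digests
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for idx in self._indices(item):
            self.counts[idx] += 1

    def estimate(self, item):
        # The minimum counter is an upper bound on the true frequency;
        # collisions can only inflate it, never deflate it.
        return min(self.counts[idx] for idx in self._indices(item))
```

Because counters only over-estimate, a join candidate whose estimated signature overlap falls below the threshold can be safely pruned.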
NASA Astrophysics Data System (ADS)
Titeux, Isabelle; Li, Yuming M.; Debray, Karl; Guo, Ying-Qiao
2004-11-01
This Note deals with an efficient algorithm to carry out the plastic integration and compute the stresses due to large strains for materials satisfying Hill's anisotropic yield criterion. The classical plastic integration algorithm, the return mapping method, is widely used for nonlinear analyses of structures and numerical simulations of forming processes, but it requires an iterative scheme and may have convergence problems. A new direct algorithm based on a scalar method is developed which allows the plastic multiplier to be obtained directly, without an iterative procedure; the computation time is thus largely reduced and the numerical problems are avoided. To cite this article: I. Titeux et al., C. R. Mecanique 332 (2004).
Comparison of simulated and actual wind shear radar data products
NASA Technical Reports Server (NTRS)
Britt, Charles L.; Crittenden, Lucille H.
1992-01-01
Prior to the development of the NASA experimental wind shear radar system, extensive computer simulations were conducted to determine the performance of the radar in combined weather and ground clutter environments. The simulation of the radar used analytical microburst models to determine weather returns and synthetic aperture radar (SAR) maps to determine ground clutter returns. These simulations were used to guide the development of hazard detection algorithms and to predict their performance. The structure of the radar simulation is reviewed. Actual flight data results from the Orlando and Denver tests are compared with simulated results. Areas of agreement and disagreement of actual and simulated results are shown.
How Albot0 finds its way home: a novel approach to cognitive mapping using robots.
Yeap, Wai K
2011-10-01
Much of what we know about cognitive mapping comes from observing how biological agents behave in their physical environments, and several of these ideas have been implemented on robots imitating such a process. In this paper a novel approach to cognitive mapping is presented whereby robots are treated as a species of their own and their cognitive mapping is investigated. Such robots are referred to as Albots. The design of the first Albot, Albot0, is presented. Albot0 computes an imprecise map and employs a novel method to find its way home. Both the map and the return-home algorithm exhibit characteristics commonly found in biological agents. What we have learned from Albot0's cognitive mapping is discussed. One major lesson is that the spatiality in a cognitive map affords us rich and useful information, and this argues against recent suggestions that the notion of a cognitive map is not a useful one. Copyright © 2011 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Starek, M. J.; Fernandez-diaz, J.; Pan, Z.; Glennie, C. L.; Shrestha, R. L.; Gibeaut, J. C.; Singhania, A.
2013-12-01
Researchers with the National Center for Airborne Laser Mapping (NCALM) at the University of Houston (UH) and the Coastal and Marine Geospatial Sciences Lab (CMGL) of the Harte Research Institute at Texas A&M University-Corpus Christi conducted a coordinated airborne and field-based survey of the Redfish Bay State Scientific Area to investigate the capabilities of shallow water bathymetric lidar for benthic mapping. Redfish Bay, located along the middle Texas coast of the Gulf of Mexico, is a state scientific area designated for the purposes of protecting and studying the native seagrasses. The mapped region is very shallow (< 1 m in most locations) and consists of a variety of benthic cover including sandy bottom, oyster reef, subaqueous vegetation, and submerged structures. For this survey, UH acquired high resolution (2.5 shots per square meter) bathymetry data using their new Optech Aquarius 532 nm green lidar. The field survey conducted by CMGL used an airboat to collect in-situ radiometer measurements, GPS position, depth, and ground-truth data of benthic type at over 80 locations within the bay. The return signal of an Aquarius lidar pulse is analyzed in real time by a hardware-based constant fraction discriminator (CFD) to detect returns from the surface and determine ranges (x,y,z points). This approach is commonly called discrete-return ranging, and Aquarius can record up to 4 returns per emitted laser pulse. In contrast, full-waveform digitization records the incoming energy of an emitted pulse by sampling it at very high frequency. Post-processing algorithms can then be applied to detect returns (ranges) from the digitized waveform. For this survey, a waveform digitizer was simultaneously operated to record the return waveforms at a rate of 1 GHz with 12 bit dynamic range.
High-resolution digital elevation models (DEMs) of the topo-bathymetry were derived from the discrete-return and full-waveform data to evaluate the relative and absolute accuracy using the collected ground-truth data. Results of this evaluation will be presented including an overview of the method used to extract peaks from the waveform data. Potential advantages and disadvantages of the different ranging modes in terms of observed accuracy, increased processing load, and information gain will also be discussed.
SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, J; Lin, H; Chow, J
2015-06-15
Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms proposed by Galvin et al, Chen et al and Siochi et al using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. Three different MLC leaf-sequencing algorithms based on Galvin et al, Chen et al and Siochi et al were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant in different plans. The number of beam segments and the total number of monitor units were calculated for the three algorithms. Results: From the numbers of beam segments and total numbers of monitor units, we found that the algorithm of Galvin et al had the largest number of monitor units, about 70% larger than the other two algorithms. Moreover, both the algorithms of Galvin et al and Siochi et al produced relatively fewer beam segments than that of Chen et al. Although the numbers of beam segments and total numbers of monitor units calculated by the different algorithms varied with the head-and-neck plans, the algorithms of Galvin et al and Siochi et al performed well with a lower number of beam segments, though the algorithm of Galvin et al had a larger total number of monitor units than that of Siochi et al. Conclusion: Although the performance of a leaf-sequencing algorithm varied with different IMRT plans having different fluence maps, an evaluation is possible based on the calculated numbers of beam segments and monitor units. In this study, the algorithm by Siochi et al was found to be more efficient for head-and-neck IMRT.
The Project Sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
NASA Astrophysics Data System (ADS)
Rupa, Chandra; Mujumdar, Pradeep
2016-04-01
In urban areas, quantification of extreme precipitation is important in the design of storm water drains and other infrastructure. Intensity Duration Frequency (IDF) relationships are generally used to obtain the design return level for a given duration and return period. Due to the lack of availability of extreme precipitation data for a sufficiently large number of years, estimating the probability of extreme events is difficult. Typically, single-station data are used to obtain the design return levels for various durations and return periods, which are used in the design of urban infrastructure for the entire city. In an urban setting, the spatial variation of precipitation can be high; the precipitation amounts and patterns often vary within short distances of less than 5 km. Therefore it is crucial to study the uncertainties in the spatial variation of return levels for various durations. In this work, the extreme precipitation is modeled spatially using Bayesian hierarchical analysis and the spatial variation of return levels is studied. The analysis is carried out with the Block Maxima approach for defining the extreme precipitation, using the Generalized Extreme Value (GEV) distribution, for Bangalore city, Karnataka state, India. Daily data for nineteen stations in and around Bangalore city are considered in the study. The analysis is carried out for summer maxima (March - May), monsoon maxima (June - September) and the annual maxima rainfall. In the hierarchical analysis, the statistical model is specified in three layers. The data layer models the block maxima, pooling the extreme precipitation from all the stations. In the process layer, the latent spatial process that drives the extreme precipitation, characterized by geographical and climatological covariates (lat-lon, elevation, mean temperature, etc.), is modeled; at the prior level, the prior distributions that govern the latent process are specified.
A Markov Chain Monte Carlo (MCMC) algorithm (a Metropolis-Hastings step within a Gibbs sampler) is used to obtain samples of the parameters from their posterior distribution. The spatial maps of return levels for specified return periods, along with the associated uncertainties, are obtained for the summer, monsoon and annual maxima rainfall. Considering various covariates, the best-fit model is selected using the Deviance Information Criterion. It is observed that the geographical covariates (latitude and longitude) outweigh the climatological covariates for the monsoon maxima rainfall. The best covariates for the summer maxima and annual maxima rainfall are mean summer precipitation and mean monsoon precipitation respectively, together with elevation in both cases. The scale invariance theory, which states that the statistical properties of a process observed at various scales are governed by the same relationship, is used to disaggregate the daily rainfall to hourly scales. The spatial maps of the scale are obtained for the study area. The spatial maps of IDF relationships thus generated are useful in storm water design, adequacy analysis and identifying areas vulnerable to flooding.
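For a fitted GEV distribution, the return level z_p for return period T (in blocks) solves G(z_p) = 1 - 1/T, which inverts in closed form. A minimal sketch of that inversion (the generic GEV formula, not the hierarchical Bayesian model itself; parameter names are illustrative):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Return level for return period T (in blocks) from GEV parameters
    (location mu, scale sigma, shape xi), solving G(z_p) = 1 - 1/T."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)          # Gumbel limit (xi -> 0)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)
```

With posterior samples of (mu, sigma, xi) at each grid point, evaluating this per sample yields the return-level maps and their uncertainties.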
On Feature Extraction from Large Scale Linear LiDAR Data
NASA Astrophysics Data System (ADS)
Acharjee, Partha Pratim
Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction and applied the extracted features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation and updating, and hydro-flattening of LiDAR data for many other applications, are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human interventions. This work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features such as the flatness of the water surface and the higher elevation change at the water-land interface, and optical properties such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues, and by integrating all results into a single output. The whole algorithm is implemented as an ArcGIS toolbox using Python libraries.
Testing and validation are performed on large datasets to determine the effectiveness of the toolbox, and results are presented. Significant power demand is located in urban areas, where, theoretically, a large amount of building surface area is also available for solar panel installation. Therefore, property owners and power generation companies can benefit from a citywide solar potential map, which can provide the estimated available annual solar energy at a given location. An efficient solar potential measurement is a prerequisite for an effective solar energy system in an urban area. In addition, the solar potential calculation from rooftops and building facades could open up a wide variety of options for solar panel installations. However, complex urban scenes make it hard to estimate the solar potential, partly because of shadows cast by the buildings. LiDAR-based 3D city models could well be the right technology for solar potential mapping. However, most current LiDAR-based local solar potential assessment algorithms mainly address rooftop potential calculation, whereas building facades can contribute a significant amount of viable surface area for solar panel installation. In this work, we introduce a new algorithm to calculate the solar potential of both rooftops and building facades. The solar potential received by the rooftops and facades over the year is also investigated in the test area.
Improvements in the Protein Identifier Cross-Reference service.
Wein, Samuel P; Côté, Richard G; Dumousseau, Marine; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan A
2012-07-01
The Protein Identifier Cross-Reference (PICR) service is a tool that allows users to map protein identifiers, protein sequences and gene identifiers across over 100 different source databases. PICR takes input through an interactive website as well as Representational State Transfer (REST) and Simple Object Access Protocol (SOAP) services. It returns the results as HTML pages, XLS and CSV files. It has been in production since 2007 and has recently been enhanced to add new functionality and increase the number of databases it covers. Protein subsequences can be searched using the Basic Local Alignment Search Tool (BLAST) against the UniProt Knowledgebase (UniProtKB) to provide an entry point to the standard PICR mapping algorithm. In addition, gene identifiers from UniProtKB and Ensembl can now be submitted as input or mapped to as output from PICR. We have also implemented a 'best-guess' mapping algorithm for UniProt. In this article, we describe the usefulness of PICR, how these changes have been implemented, and the corresponding additions to the web services. Finally, we explain that the number of source databases covered by PICR has increased from the initial 73 to the current 102. New resources include several new species-specific Ensembl databases as well as the Ensembl Genomes databases. PICR can be accessed at http://www.ebi.ac.uk/Tools/picr/.
Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.
2015-01-01
The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics against system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.
A robust return-map algorithm for general multisurface plasticity
Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; ...
2016-06-16
Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.
Financial fluctuations anchored to economic fundamentals: A mesoscopic network approach.
Sharma, Kiran; Gopalakrishnan, Balagopal; Chakrabarti, Anindya S; Chakraborti, Anirban
2017-08-14
We demonstrate the existence of an empirical linkage between nominal financial networks and the underlying economic fundamentals, across countries. We construct the nominal return correlation networks from daily data to encapsulate sector-level dynamics and infer the relative importance of the sectors in the nominal network through measures of centrality and clustering algorithms. Eigenvector centrality robustly identifies the backbone of the minimum spanning tree defined on the return networks as well as the primary cluster in the multidimensional scaling map. We show that the sectors that are relatively large in size, defined with three metrics, viz., market capitalization, revenue and number of employees, constitute the core of the return networks, whereas the periphery is mostly populated by relatively smaller sectors. Therefore, sector-level nominal return dynamics are anchored to the real size effect, which ultimately shapes the optimal portfolios for risk management. Our results are reasonably robust across 27 countries of varying degrees of prosperity and across periods of market turbulence (2008-09) as well as periods of relative calmness (2012-13 and 2015-16).
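Eigenvector centrality on a correlation-like matrix can be computed by simple power iteration. A minimal sketch of the generic method (not the authors' pipeline; the example matrix is hypothetical):

```python
import numpy as np

def eigenvector_centrality(adj, tol=1e-10, max_iter=1000):
    """Eigenvector centrality of a nonnegative symmetric matrix
    via power iteration; returns the normalized dominant eigenvector."""
    n = adj.shape[0]
    x = np.ones(n) / n
    for _ in range(max_iter):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)   # renormalize each step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```

On a toy return-correlation matrix where two sectors co-move strongly, those two receive equal, higher centrality than the weakly connected third, mirroring the core-periphery picture described above.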
MAGENCO: A map generalization controller for Arc/Info
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganter, J.H.; Cashwell, J.W.
The Arc/Info GENERALIZE command implements the Douglas-Peucker algorithm, a well-regarded approach that preserves line "character" while reducing the number of points according to a tolerance parameter supplied by the user. The authors have developed an Arc Macro Language (AML) interface called MAGENCO that allows the user to browse workspaces, select a coverage, extract a sample from this coverage, then apply various tolerances to the sample. The results are shown in multiple display windows that are arranged around the original sample for quick visual comparison. The user may then return to the whole coverage and apply the chosen tolerance. The authors analyze the ergonomics of line simplification, explain the design (which includes an animated demonstration of the Douglas-Peucker algorithm), and discuss key points of the MAGENCO implementation.
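The Douglas-Peucker algorithm itself is compact: keep the interior point farthest from the chord between the endpoints if its deviation exceeds the tolerance, and recurse on the two halves. A generic sketch (not the GENERALIZE command's implementation):

```python
def douglas_peucker(points, tolerance):
    """Douglas-Peucker line simplification.

    points    : list of (x, y) tuples
    tolerance : maximum allowed perpendicular deviation
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    seg_len = (dx * dx + dy * dy) ** 0.5
    # Find the interior point farthest from the anchor-floater chord
    best_i, best_d = 0, -1.0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        if seg_len == 0.0:
            d = ((px - x1) ** 2 + (py - y1) ** 2) ** 0.5
        else:
            d = abs(dx * (y1 - py) - dy * (x1 - px)) / seg_len
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= tolerance:
        return [points[0], points[-1]]   # whole span within tolerance
    # Recurse on both halves, dropping the duplicated split point
    left = douglas_peucker(points[: best_i + 1], tolerance)
    right = douglas_peucker(points[best_i:], tolerance)
    return left[:-1] + right
```

Larger tolerances collapse near-collinear runs to their endpoints while sharp corners survive, which is why the algorithm is said to preserve line character.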
Computing return times or return periods with rare event algorithms
NASA Astrophysics Data System (ADS)
Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy
2018-04-01
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events, which so far have typically computed probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein-Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
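A simple form of the block-maxima estimator can be sketched as follows: split the series into blocks of length T, let q be the fraction of blocks whose maximum exceeds the threshold, and estimate the return time as r = -T / ln(1 - q). This is a toy illustration assuming independent blocks, not the improved rare-event-algorithm variant of the paper:

```python
import math

def return_time_from_blocks(series, block_size, threshold):
    """Estimate the return time of exceeding `threshold` from block maxima.

    Splits the series into non-overlapping blocks of length block_size,
    computes the fraction q of blocks whose maximum exceeds the threshold,
    and returns r = -T / ln(1 - q).
    """
    blocks = [series[i:i + block_size]
              for i in range(0, len(series) - block_size + 1, block_size)]
    maxima = [max(b) for b in blocks]
    q = sum(m > threshold for m in maxima) / len(maxima)
    if q == 0.0:
        return math.inf                      # never observed: no estimate
    if q == 1.0:
        raise ValueError("exceeded in every block; use smaller blocks")
    return -block_size / math.log(1.0 - q)
```

The rare-event algorithms discussed above effectively supply much better estimates of q for thresholds far beyond what direct simulation can reach.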
Hyperspectral feature mapping classification based on mathematical morphology
NASA Astrophysics Data System (ADS)
Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli
2016-03-01
This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers. The spectral feature mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm and the binary encoding mapping algorithm. The experiments show that the proposed algorithm performs better than the other algorithms under the same conditions and has higher classification accuracy.
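The SAM baseline used in the comparison assigns each pixel to the endmember with the smallest spectral angle, a measure that is insensitive to illumination scaling. A minimal sketch of generic SAM classification (not the proposed morphology-based algorithm; array shapes are illustrative):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(image, endmembers):
    """Assign each pixel to the endmember with the smallest spectral angle.

    image      : array of shape (rows, cols, bands)
    endmembers : array of shape (k, bands)
    """
    rows, cols, _ = image.shape
    labels = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            angles = [spectral_angle(image[r, c], e) for e in endmembers]
            labels[r, c] = int(np.argmin(angles))
    return labels
```

Because the angle ignores magnitude, a brightly and a dimly lit pixel of the same material map to the same class.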
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gan, Yixiang; Kamlah, Marc
In this investigation, a thermo-mechanical model of pebble beds is adopted and developed based on experiments by Dr. Reimann at Forschungszentrum Karlsruhe (FZK). The framework of the present material model is composed of a non-linear elastic law, the Drucker-Prager-Cap theory, and a modified creep law. Furthermore, the volumetric inelastic strain dependent thermal conductivity of beryllium pebble beds is taken into account and full thermo-mechanical coupling is considered. Investigation showed that the Drucker-Prager-Cap model implemented in ABAQUS cannot fulfill the requirements of both the prediction of large creep strains and the hardening behaviour caused by creep, which are of importance with respect to the application of pebble beds in fusion blankets. Therefore, UMAT (user defined material's mechanical behaviour) and UMATHT (user defined material's thermal behaviour) routines are used to re-implement the present thermo-mechanical model in ABAQUS. An elastic predictor radial return mapping algorithm is used to solve the non-associated plasticity iteratively, and a proper tangent stiffness matrix is obtained for cost-efficiency in the calculation. An explicit creep mechanism is adopted for the prediction of time-dependent behaviour in order to represent large creep strains at high temperatures. Finally, the thermo-mechanical interactions are implemented in a UMATHT routine for the coupled analysis. The oedometric compression tests and creep tests of pebble beds at different temperatures are simulated with the help of the present UMAT and UMATHT routines, and the simulations are compared with the experiments. (authors)
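The elastic-predictor / radial-return update mentioned above can be sketched for the simplest case, von Mises (J2) plasticity with linear isotropic hardening. The material constants below are illustrative placeholders (units of MPa), not the Drucker-Prager-Cap parameters of the pebble-bed model:

```python
import numpy as np

def radial_return_j2(strain_inc, state, mu=80e3, kappa=160e3, sigma_y=250.0, H=1e3):
    """One step of the elastic-predictor / radial-return corrector for J2
    plasticity with linear isotropic hardening.

    strain_inc: (3,3) small-strain increment
    state: dict with 'stress' (3,3) and accumulated plastic strain 'eps_p'
    """
    I = np.eye(3)
    dev = lambda a: a - np.trace(a) / 3.0 * I
    # elastic predictor: trial stress assuming a purely elastic step
    trial = state['stress'] + kappa * np.trace(strain_inc) * I + 2.0 * mu * dev(strain_inc)
    s = dev(trial)
    s_norm = np.sqrt((s * s).sum())
    yield_radius = np.sqrt(2.0 / 3.0) * (sigma_y + H * state['eps_p'])
    f = s_norm - yield_radius
    if f <= 0.0:                      # elastic step: accept the trial state
        state['stress'] = trial
        return state
    # plastic corrector: return radially to the (hardened) yield surface
    dgamma = f / (2.0 * mu + 2.0 / 3.0 * H)
    n = s / s_norm
    state['stress'] = trial - 2.0 * mu * dgamma * n
    state['eps_p'] += np.sqrt(2.0 / 3.0) * dgamma
    return state
```

After a plastic step, the deviatoric stress norm sits exactly on the updated yield surface, which is what makes the scheme a one-step "return".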
A Decade of Remote Sensing River Bathymetry with the Experimental Advanced Airborne Research LiDAR
NASA Astrophysics Data System (ADS)
Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.; Skinner, K.
2012-12-01
Since 2002, the first generation of the Experimental Advanced Airborne Research LiDAR (EAARL-A) sensor has been deployed for mapping rivers and streams. We present and summarize the results of comparisons between ground truth surveys and bathymetry collected by the EAARL-A sensor in a suite of rivers across the United States. These comparisons include reaches on the Platte River (NE), Boise and Deadwood Rivers (ID), Blue and Colorado Rivers (CO), Klamath and Trinity Rivers (CA), and the Shenandoah River (VA). In addition to diverse channel morphologies (braided, single thread, and meandering) these rivers possess a variety of substrates (sand, gravel, and bedrock) and a wide range of optical characteristics which influence the attenuation and scattering of laser energy through the water column. Root mean square errors between ground truth elevations and those measured by the EAARL-A ranged from 0.15-m in rivers with relatively low turbidity and highly reflective sandy bottoms to over 0.5-m in turbid rivers with less reflective substrates. Mapping accuracy with the EAARL-A has proved challenging in pools where bottom returns are either absent in waveforms or are of such low intensity that they are treated as noise by waveform processing algorithms. Resolving bathymetry in shallow depths where near surface and bottom returns are typically convolved also presents difficulties for waveform processing routines. The results of these evaluations provide an empirical framework to discuss the capabilities and limitations of the EAARL-A sensor as well as previous generations of post-processing software for extracting bathymetry from complex waveforms. These experiences and field studies not only provide benchmarks for the evaluation of the next generation of bathymetric LiDARs for use in river mapping, but also highlight the importance of developing and standardizing more rigorous methods to characterize substrate reflectance and in-situ optical properties at study sites. 
They also point out the continued necessity of ground truth data for algorithm refinement and survey verification.
Rapid Mapping Of Floods Using SAR Data: Opportunities And Critical Aspects
NASA Astrophysics Data System (ADS)
Pulvirenti, Luca; Pierdicca, Nazzareno; Chini, Marco
2013-04-01
The potential of spaceborne Synthetic Aperture Radar (SAR) for flood mapping has been demonstrated by several past investigations. The synoptic view, the capability to operate in almost all weather conditions during both day and night, and the sensitivity of the microwave band to water are the key features that make SAR data useful for monitoring inundation events. In addition, their high spatial resolution, which can reach 1 m with the new generation of X-band instruments such as TerraSAR-X and COSMO-SkyMed (CSK), provides emergency managers with flood maps at very high spatial resolution. CSK also makes frequent observations of regions hit by floods possible, thanks to its four-satellite constellation. Current research on flood mapping using SAR is focused on the development of automatic algorithms to be used in near real time applications. The approaches are generally based on the low radar return from smooth open water bodies, which behave as specular reflectors and appear dark in SAR images. The major advantage of automatic algorithms is the computational efficiency that makes them suitable for rapid mapping purposes. The choice of the threshold value that, in this kind of algorithm, separates flooded from non-flooded areas is a critical aspect because it depends on the characteristics of the observed scenario and on system parameters. To deal with this aspect, an algorithm for automatic detection of regions of low backscatter has been developed. It accomplishes three steps: 1) division of the SAR image into a set of non-overlapping sub-images, or splits; 2) selection of inhomogeneous sub-images that contain at least two populations of pixels, one of which is formed by dark pixels; 3) application, in sequence, of an automatic thresholding algorithm and a region growing algorithm to produce a homogeneous map of flooded areas.
Besides the aforementioned choice of the threshold, rapid mapping of floods presents other critical aspects. Searching only for low SAR backscatter may cause inaccuracies because flooded soils do not always act as smooth open water bodies. Wind, or vegetation emerging above the water surface, may increase the radar backscatter. In particular, mapping flooded vegetation with SAR data is difficult because the backscattering phenomena in the volume between canopy, trunks and floodwater are quite complex. A typical phenomenon is the double-bounce effect involving soil and stems or trunks, which is generally enhanced by floodwater, so that flooded vegetation may appear very bright in a SAR image. Even in the absence of dense vegetation or wind, some regions may appear dark because of artefacts due to topography (shadowing), absorption caused by wet snow, and attenuation caused by heavy precipitating clouds (X-band SARs). Examples of these effects, which may limit the reliability of flood maps, will be presented at the conference, together with some indications on how to deal with them (e.g. the presence of vegetation and of artefacts).
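The thresholding-plus-region-growing idea can be sketched as follows. Otsu's method stands in for the automatic threshold selection and connected-component filtering stands in for region growing; both are illustrative assumptions, not the authors' exact split-based algorithm:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Automatic threshold maximizing the between-class variance (Otsu)."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # weight of the dark class
    w1 = 1.0 - w0                     # weight of the bright class
    m = np.cumsum(p * centers)
    m_tot = m[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        var_between = (m_tot * w0 - m) ** 2 / (w0 * w1)
    return centers[np.nanargmax(var_between)]

def flood_map(sar_db, min_pixels=10):
    """Dark-pixel flood mask: threshold, then keep connected regions larger
    than min_pixels (a crude stand-in for region growing)."""
    mask = sar_db < otsu_threshold(sar_db)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= min_pixels) + 1
    return np.isin(labels, keep_ids)
```

On a synthetic backscatter image with a dark (flooded) block embedded in brighter clutter, the mask recovers the block while isolated dark speckle is discarded.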
Autonomous Rock Tracking and Acquisition from a Mars Rover
NASA Technical Reports Server (NTRS)
Maimone, Mark W.; Nesnas, Issa A.; Das, Hari
1999-01-01
Future Mars exploration missions will perform two types of experiments: science instrument placement for close-up measurement, and sample acquisition for return to Earth. In this paper we describe algorithms we developed for these tasks, and demonstrate them in field experiments using a self-contained Mars rover prototype, the Rocky 7 rover. Our algorithms perform visual servoing on an elevation map instead of image features, because the latter are subject to abrupt scale changes during the approach. This allows us to compensate for the poor odometry that results from motion on loose terrain. We demonstrate the successful grasp of a 5 cm long rock over 1 m away using 103-degree field-of-view stereo cameras, and placement of a flexible mast on a rock outcropping over 5 m away using 43-degree FOV stereo cameras.
A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns
NASA Astrophysics Data System (ADS)
Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng
2009-11-01
Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, entropy optimization model and chance constrained programming model. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating the simulated annealing algorithm, neural networks and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of fuzzy returns and fuzzy simulation is used to generate the training data for the neural network. Since these models have traditionally been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which show that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large problems.
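A compact sketch of the simulated-annealing component. The neural-network surrogate and fuzzy simulation of the hybrid algorithm are replaced here by a crisp mean/covariance objective, which is an assumption for illustration only:

```python
import numpy as np

def anneal_portfolio(mean, cov, lam=1.0, n_iter=5000, t0=1.0, seed=0):
    """Simulated annealing for max  lam*w'mean - w'Cov w  over the simplex
    (w >= 0, sum w = 1). In the paper's hybrid algorithm the objective would
    instead be evaluated by a trained neural network."""
    rng = np.random.default_rng(seed)
    n = len(mean)
    w = np.full(n, 1.0 / n)
    score = lambda w: lam * w @ mean - w @ cov @ w
    s = score(w)
    best, best_s = w.copy(), s
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-9        # linear cooling schedule
        cand = np.clip(w + rng.normal(0.0, 0.05, n), 0.0, None)
        cand /= cand.sum()                        # project back onto the simplex
        cs = score(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if cs > s or rng.random() < np.exp((cs - s) / t):
            w, s = cand, cs
            if s > best_s:
                best, best_s = w.copy(), s
    return best, best_s
```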
An Artificial Bee Colony Algorithm for Uncertain Portfolio Selection
Chen, Wei
2014-01-01
Portfolio selection is an important issue for researchers and practitioners. In this paper, under the assumption that security returns are given by experts' evaluations rather than historical data, we discuss the portfolio adjusting problem which takes transaction costs and diversification degree of portfolio into consideration. Uncertain variables are employed to describe the security returns. In the proposed mean-variance-entropy model, the uncertain mean value of the return is used to measure investment return, the uncertain variance of the return is used to measure investment risk, and the entropy is used to measure diversification degree of portfolio. In order to solve the proposed model, a modified artificial bee colony (ABC) algorithm is designed. Finally, a numerical example is given to illustrate the modelling idea and the effectiveness of the proposed algorithm. PMID:25089292
Optimizing disk registration algorithms for nanobeam electron diffraction strain mapping
Pekin, Thomas C.; Gammer, Christoph; Ciston, Jim; ...
2017-01-28
Scanning nanobeam electron diffraction strain mapping is a technique by which the positions of diffracted disks, sampled at the nanoscale over a crystalline sample, are used to reconstruct a strain map over a large area. It is important that the disk positions be measured accurately, as their positions relative to a reference are directly used to calculate strain. In this study, we compare several correlation methods using both simulated and experimental data in order to directly probe susceptibility to measurement error due to non-uniform diffracted disk illumination structure. We found that prefiltering the diffraction patterns with a Sobel filter before performing cross correlation, or performing a square-root magnitude weighted phase correlation, returned the best results when inner disk structure was present. We tested these methods on simulated datasets as well as experimental data from unstrained silicon and a twin grain boundary in 304 stainless steel.
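A minimal sketch of the Sobel-prefiltered cross-correlation idea, for integer-pixel shifts only; the paper's sub-pixel peak fitting and the square-root magnitude weighted phase correlation variant are omitted:

```python
import numpy as np
from scipy import ndimage

def register_disk(img, ref):
    """Estimate the integer-pixel shift of `img` relative to `ref` by
    cross-correlating Sobel gradient-magnitude images; the prefilter
    emphasizes the disk edge over its interior intensity structure."""
    def edge(a):
        gx, gy = ndimage.sobel(a, axis=0), ndimage.sobel(a, axis=1)
        return np.hypot(gx, gy)
    A = np.fft.fft2(edge(ref))
    B = np.fft.fft2(edge(img))
    corr = np.fft.ifft2(np.conj(A) * B).real     # circular cross-correlation
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # map wrapped peak coordinates into the symmetric shift range
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Rolling a synthetic disk by a known offset and registering it against the original recovers the shift exactly.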
NASA Technical Reports Server (NTRS)
Mcmillan, J. D.
1981-01-01
The accuracy of the GEOS-3 significant waveheight estimates was determined by comparison with buoy measurements of significant waveheight, and a global atlas of the GEOS-3 significant waveheight estimates is presented. The GEOS-3 significant waveheight estimation algorithm is derived by analyzing the return waveform characteristics of the altimeter. Convergence considerations are examined, the rationale for a smoothing technique is presented, and the convergence characteristics of the smoothed estimate are discussed. The GEOS-3 data are selected for comparison with buoy measurements. The GEOS-3 significant waveheight estimates are assembled into a global atlas of contour maps. Both high and low sea state contour maps are presented, and the data are displayed both by season and for the entire duration of the GEOS-3 mission.
Validation of a Global Hydrodynamic Flood Inundation Model
NASA Astrophysics Data System (ADS)
Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.
2014-12-01
In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1 km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90 m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankfull return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok, Thailand. Results show that global flood hazard models can have considerable skill, given careful treatment of the errors in the publicly available data used as their input.
How well should probabilistic seismic hazard maps work?
NASA Astrophysics Data System (ADS)
Vanneste, K.; Stein, S.; Camelbeeck, T.; Vleminckx, B.
2016-12-01
Recent large earthquakes that gave rise to shaking much stronger than shown in earthquake hazard maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSHA model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating the shaking history of an area with an assumed distribution of earthquakes, frequency-magnitude relation, temporal occurrence model, and ground-motion prediction equation. We compare the "observed" shaking at many sites over time to that predicted by a hazard map generated for the same set of parameters. PSHA predicts that the fraction of sites at which shaking will exceed the mapped value is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. This implies that shaking in large earthquakes is typically greater than shown on hazard maps, as has occurred in a number of cases. A large number of simulated earthquake histories yield distributions of shaking consistent with this forecast, with a scatter about this value that decreases as t/T increases. The median results are somewhat lower than predicted for small values of t/T and approach the predicted value for larger values of t/T. Hence, the algorithm appears to be internally consistent and can be regarded as verified for this set of simulations. Validation is more complicated because a real observed earthquake history can yield a fractional exceedance significantly higher or lower than that predicted while still being consistent with the hazard map in question.
As a result, given that in the real world we have only a single sample, it is hard to assess whether a misfit between a map and observations arises by chance or reflects a biased map.
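The predicted exceedance fraction p = 1 - exp(-t/T) can be checked with a toy Monte-Carlo simulation, a deliberately simplified stand-in for the paper's full shaking-history simulations:

```python
import numpy as np

def exceedance_fraction(t_over_T, n_sites=100000, seed=0):
    """Monte-Carlo check of p = 1 - exp(-t/T): at each site, exceedances of
    the mapped shaking level arrive as a Poisson process with rate 1/T, so
    the number of exceedances in time t is Poisson with mean t/T; a site
    'fails' the map if at least one exceedance occurs."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(t_over_T, n_sites)
    return np.mean(counts > 0)
```

With t/T = 0.5 the simulated fraction converges on 1 - exp(-0.5) ≈ 0.39, matching the formula.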
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint optimization based mapping method for solving it. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard, and proposes a node mapping algorithm and a link mapping algorithm to solve them. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the resources available at each node and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization approach, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.
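The greedy node-mapping phase might look like the following sketch, which considers CPU capacity only and ignores the distance factor and the link phase; the data structures are illustrative assumptions:

```python
def greedy_node_mapping(virtual_nodes, substrate_nodes):
    """Greedy node-mapping sketch: place the most demanding virtual node
    first, each onto the substrate node with the largest remaining capacity
    that can host it.

    virtual_nodes: {name: cpu_demand}
    substrate_nodes: {name: cpu_capacity}
    returns {virtual: substrate} or None if the request cannot be mapped.
    """
    free = dict(substrate_nodes)           # remaining capacity per host
    mapping = {}
    for v, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)     # best-resourced substrate node
        if free[host] < demand:
            return None                    # insufficient substrate resources
        mapping[v] = host
        free[host] -= demand
    return mapping
```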
High-speed polarization sensitive optical coherence tomography for retinal diagnostics
NASA Astrophysics Data System (ADS)
Yin, Biwei; Wang, Bingqing; Vemishetty, Kalyanramu; Nagle, Jim; Liu, Shuang; Wang, Tianyi; Rylander, Henry G., III; Milner, Thomas E.
2012-01-01
We report the design and construction of an FPGA-based high-speed swept-source polarization-sensitive optical coherence tomography (SS-PS-OCT) system for clinical retinal imaging. The clinical application of the SS-PS-OCT system is accurate measurement and display of thickness, phase retardation and birefringence maps of the retinal nerve fiber layer (RNFL) in human subjects for early detection of glaucoma. The FPGA-based SS-PS-OCT system provides three incident polarization states on the eye and uses a bulk-optic polarization sensitive balanced detection module to record two orthogonal interference fringe signals. The interference fringe signals and the relative phase retardation between the two orthogonal polarization states are used to obtain Stokes vectors of light returning from each RNFL depth. We implement a Levenberg-Marquardt algorithm on a Field Programmable Gate Array (FPGA) to compute accurate phase retardation and birefringence maps. For each retinal scan, a three-state Levenberg-Marquardt nonlinear algorithm is applied to 360 clusters, each consisting of 100 A-scans, to determine accurate maps of phase retardation and birefringence in less than 1 second after patient measurement, allowing real-time clinical imaging, a speedup of more than 300 times over previous implementations. We report application of the FPGA-based SS-PS-OCT system for real-time clinical imaging of patients enrolled in a clinical study at the Eye Institute of Austin and Duke Eye Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haraldsdóttir, Hulda S.; Fleming, Ronan M. T.
Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry, but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. Finally, we also give examples of new applications made possible by elucidating the atomic structure of conserved moieties.
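Moiety conservation relations as left null space vectors can be illustrated on a toy network, a linear pathway A → B → C. The stoichiometric matrix and the use of sympy are illustrative; the paper's atom-mapping method is not reproduced:

```python
import sympy as sp

# Stoichiometric matrix S (rows: species A, B, C; columns: reactions A->B, B->C)
S = sp.Matrix([[-1,  0],
               [ 1, -1],
               [ 0,  1]])

# Left null space of S = null space of S^T: vectors l with l^T S = 0.
# Nonnegative integer vectors in this space are candidate conservation relations.
left_null = S.T.nullspace()
l = left_null[0]          # here: a single conserved moiety carried by A, B and C
```

For this pathway the single basis vector is (1, 1, 1): one moiety passes intact through A, B and C, so the total A + B + C is conserved.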
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems strikes a good trade-off between transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT), using anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures classify voxels based only on intensity values, neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied with spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom, creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at the low noise level in the gray matter (GM) and white matter (WM) regions in terms of the noise versus bias tradeoff. When noise increased to a medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform, with smaller isolated structures, than the WM region. At the high noise level, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images.
Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
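The intensity-only JE measure at the heart of both algorithms reduces to the entropy of a joint intensity histogram. A plain (non-wavelet) sketch:

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=32):
    """Joint entropy (in bits) of two aligned images, computed from their
    joint intensity histogram; lower JE indicates stronger anato-functional
    agreement. The paper's WJE variant applies this per wavelet subband."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0*log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())
```

An image paired with itself has minimal JE (the joint histogram collapses toward a diagonal), while pairing it with a shuffled copy destroys the correspondence and raises JE.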
SU-F-T-527: A Novel Dynamic Multileaf Collimator Leaf-Sequencing Algorithm in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, J; Lin, H; Chow, J
Purpose: A novel leaf-sequencing algorithm is developed for generating arbitrary beam intensity profiles in discrete levels using a dynamic multileaf collimator (MLC). The efficiency of this dynamic MLC leaf-sequencing method was evaluated using external beam treatment plans delivered by the intensity modulated radiation therapy technique. Methods: To qualify and validate this algorithm, an integral test for the beam segments of the MLC generated by the CORVUS treatment planning system was performed with clinical intensity map experiments. The treatment plans were optimized and the fluence maps for all photon beams were determined. The algorithm starts with an algebraic expression for the area under the beam profile; the coefficients in the expression can be transformed into specifications for the leaf-setting sequence. The leaf optimization procedure was then applied and analyzed for clinically relevant intensity profiles in cancer treatment. Results: The macrophysical effect of this method can be described by volumetric plan evaluation tools such as dose-volume histograms (DVHs). The DVH results are in good agreement with those from the CORVUS treatment planning system. Conclusion: We developed a dynamic MLC method to examine the stability of leaf speed, including the effects of acceleration and deceleration of leaf motion, in order to ensure that leaf-speed variations do not affect the generated intensity profile. The mechanical requirements were better satisfied using this method. The project is sponsored by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
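The idea of deriving a leaf-setting sequence from the area under the beam profile can be illustrated with the classic unidirectional-sweep decomposition, a generic textbook scheme rather than the authors' specific algorithm:

```python
import numpy as np

def leaf_opening_times(fluence):
    """Unidirectional-sweep leaf sequencing sketch: decompose a 1D discrete
    fluence profile f into cumulative opening-time curves of the two leaves,
    f(x) = t_trail(x) - t_lead(x), built from the profile's gradients.
    Total beam-on time equals f[0] plus the sum of positive increments."""
    f = np.asarray(fluence, dtype=float)
    d = np.diff(f)
    # trailing leaf advances only where the fluence rises
    t_trail = np.concatenate([[f[0]], f[0] + np.cumsum(np.maximum(d, 0.0))])
    # leading leaf absorbs the falls, keeping both curves non-decreasing
    t_lead = t_trail - f
    return t_lead, t_trail
```

Both curves are monotone (each leaf sweeps one way), and their difference reproduces the requested fluence exactly.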
VizieR Online Data Catalog: Outliers and similarity in APOGEE (Reis+, 2018)
NASA Astrophysics Data System (ADS)
Reis, I.; Poznanski, D.; Baron, D.; Zasowski, G.; Shahaf, S.
2017-11-01
t-SNE is a dimensionality reduction algorithm that is particularly well suited for the visualization of high-dimensional datasets. We use t-SNE to visualize our distance matrix. A priori, these distances could define a space with almost as many dimensions as objects, i.e., tens of thousands of dimensions. Obviously, since many stars are quite similar, and their spectra are defined by a few physical parameters, the minimal spanning space might be smaller. By using t-SNE we can examine the structure of our sample projected into 2D. We use our distance matrix as input to the t-SNE algorithm and in return get a 2D map of the objects in our dataset. For each star in a sample of 183232 APOGEE stars, we provide the APOGEE IDs of the 99 stars with the most similar spectra (according to the method described in the paper), ordered by similarity. (3 data files).
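t-SNE itself requires a library implementation, but the step of embedding a precomputed distance matrix into 2D can be sketched with classical multidimensional scaling, used here as a simple numpy-only stand-in for the same workflow:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed an n-by-n distance matrix into `dim` dimensions by classical
    MDS (double-centering plus top eigenvectors). A numpy-only stand-in
    for t-SNE applied to a precomputed distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]        # keep the largest eigenvalues
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

For Euclidean distances the embedding reproduces the pairwise distances exactly; t-SNE instead optimizes for preserving local neighborhoods, which is why it is preferred for visualizing cluster structure.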
Leveraging the NLM map from SNOMED CT to ICD-10-CM to facilitate adoption of ICD-10-CM.
Cartagena, F Phil; Schaeffer, Molly; Rifai, Dorothy; Doroshenko, Victoria; Goldberg, Howard S
2015-05-01
Develop and test web services to retrieve and identify the most precise ICD-10-CM code(s) for a given clinical encounter. Facilitate creation of user interfaces that 1) provide an initial shortlist of candidate codes, ideally visible on a single screen; and 2) enable code refinement. To satisfy our high-level use cases, the analysis and design process involved reviewing available maps and crosswalks, designing the rule adjudication framework, determining necessary metadata, retrieving related codes, and iteratively improving the code refinement algorithm. The Partners ICD-10-CM Search and Mapping Services (PI-10 Services) are SOAP web services written using Microsoft's .NET 4.0 Framework, Windows Communication Foundation, and SQL Server 2012. The services cover 96% of the Partners problem list subset of SNOMED CT codes that map to ICD-10-CM codes and can return up to 76% of the 69,823 billable ICD-10-CM codes prior to creation of custom mapping rules. We consider ways to increase 1) the coverage ratio of the Partners problem list subset of SNOMED CT codes and 2) the upper bound of returnable ICD-10-CM codes by creating custom mapping rules. Future work will investigate the utility of the transitive closure of SNOMED CT codes and other methods to assist in custom rule creation and, ultimately, to provide more complete coverage of ICD-10-CM codes. ICD-10-CM will be easier for clinicians to manage if applications display short lists of candidate codes from which clinicians can subsequently select a code for further refinement. The PI-10 Services support ICD-10 migration by implementing this paradigm and enabling users to consistently and accurately find the best ICD-10-CM code(s) without translation from ICD-9-CM. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Status and future of extraterrestrial mapping programs
NASA Technical Reports Server (NTRS)
Batson, R. M.
1981-01-01
Extensive mapping programs have been completed for the Earth's Moon and for the planet Mercury. Mars, Venus, and the Galilean satellites of Jupiter (Io, Europa, Ganymede, and Callisto), are currently being mapped. The two Voyager spacecraft are expected to return data from which maps can be made of as many as six of the satellites of Saturn and two or more of the satellites of Uranus. The standard reconnaissance mapping scales used for the planets are 1:25,000,000 and 1:5,000,000; where resolution of data warrants, maps are compiled at the larger scales of 1:2,000,000, 1:1,000,000 and 1:250,000. Planimetric maps of a particular planet are compiled first. The first spacecraft to visit a planet is not designed to return data from which elevations can be determined. As exploration becomes more intensive, more sophisticated missions return photogrammetric and other data to permit compilation of contour maps.
Klein, Daniel J.; Baym, Michael; Eckhoff, Philip
2014-01-01
Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by ), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which “success” is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria. PMID:25078087
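The mapping from Bernoulli simulation outcomes to a success-probability surface can be sketched with plain Nadaraya-Watson kernel regression over one parameter. The full Separatrix Algorithm adds density correction and adaptive experiment design; the logistic ground truth below is invented purely for illustration:

```python
import numpy as np

def success_probability(theta_samples, outcomes, grid, h=0.1):
    """Nadaraya-Watson kernel regression of Bernoulli outcomes on a 1D
    parameter: a minimal sketch of estimating goal-achievement probability
    from stochastic-model runs (not the full Separatrix Algorithm)."""
    K = np.exp(-0.5 * ((grid[:, None] - theta_samples[None, :]) / h) ** 2)
    return (K * outcomes).sum(1) / K.sum(1)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 1, 2000)                  # sampled intervention parameters
p_true = 1 / (1 + np.exp(-20 * (theta - 0.5)))   # hidden success probability
y = (rng.random(2000) < p_true).astype(float)    # each run = one Bernoulli trial
grid = np.linspace(0, 1, 101)
p_hat = success_probability(theta, y, grid)
theta_95 = grid[np.argmax(p_hat >= 0.95)]        # estimated 95%-success frontier
```

Each model run contributes a single 0/1 observation, so many replicates per region are needed before the smoothed probability surface, and hence the estimated separatrix, stabilizes.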
Infrared dim target detection based on visual attention
NASA Astrophysics Data System (ADS)
Wang, Xin; Lv, Guofang; Xu, Lizhong
2012-11-01
Accurate and fast detection of infrared (IR) dim targets is of great importance for infrared precision guidance, early warning, video surveillance, etc. Based on human visual attention mechanisms, an automatic detection algorithm for infrared dim targets is presented. After analyzing the characteristics of infrared dim target images, the method first designs Difference of Gaussians (DoG) filters to compute the saliency map. Then the salient regions where potential targets exist are extracted by searching the saliency map with a control mechanism of winner-take-all (WTA) competition and inhibition-of-return (IOR). Finally, these regions are screened using the characteristics of dim IR targets, so that true targets are detected and spurious objects are rejected. Experiments performed on real-life IR images show that the proposed method has satisfying detection effectiveness and robustness. Meanwhile, it has high detection efficiency and can be used for real-time detection.
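A minimal sketch of the DoG saliency / WTA / IOR pipeline on a synthetic frame; the filter scales and suppression radius are illustrative, not the paper's values:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in pure numpy."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def dog_saliency(img, s1=1.0, s2=3.0):
    """Difference-of-Gaussians saliency map (center minus surround)."""
    return gaussian_blur(img, s1) - gaussian_blur(img, s2)

def wta_with_ior(sal, n_targets=2, radius=5):
    """Winner-take-all peak picking with inhibition-of-return suppression."""
    sal = sal.copy()
    yy, xx = np.indices(sal.shape)
    peaks = []
    for _ in range(n_targets):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        peaks.append((int(y), int(x)))
        sal[(yy - y)**2 + (xx - x)**2 <= radius**2] = -np.inf  # IOR
    return peaks
```

On a frame with two point targets of different brightness, WTA returns the brighter one first, and IOR suppression lets the second be found on the next iteration.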
Seismic migration for SAR focusing: Interferometrical applications
NASA Astrophysics Data System (ADS)
Prati, C.; Montiguarnieri, A.; Damonti, E.; Rocca, F.
SAR (Synthetic Aperture Radar) data focusing is analyzed from a theoretical point of view. Two applications of a SAR data processing algorithm are presented in which the phases of the returns are used to recover parameters of interest of the observed scenes. Migration techniques, similar to those used in seismic signal processing for oil prospecting, were implemented for the determination of the terrain altitude map from a satellite and the evaluation of the sensor attitude for an airplane. A satisfying precision was achieved: the interferometric system was shown to detect variations of the airplane roll angle of a small fraction of a degree.
Spatial Mapping of Organic Carbon in Returned Samples from Mars
NASA Astrophysics Data System (ADS)
Siljeström, S.; Fornaro, T.; Greenwalt, D.; Steele, A.
2018-04-01
Spatially mapping organic material to the minerals present in a sample will be essential for understanding the origin of any organics in returned samples from Mars. It will be shown how ToF-SIMS may be used to map organics in such samples.
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation-maximization algorithm seeking the maximum-likelihood solution, leads to increased noise at higher iterations. The maximum a posteriori (MAP) estimate removes this divergence. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary used to sparsify the PET images during reconstruction is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to that of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in simulations at various noise levels and in patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential for quantitative PET imaging.
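For context, the conventional MAP baseline that DL-MAP is compared against can be sketched as a 1D one-step-late MAP-EM iteration with a quadratic smoothing prior; the system matrix and data below are toy stand-ins, and the paper replaces the smoothing prior with a learned-dictionary sparsity prior:

```python
import numpy as np

def map_em(y, A, beta=0.1, n_iter=200):
    """One-step-late MAP-EM for Poisson data with a quadratic smoothing
    prior: a minimal 1D sketch of MAP PET reconstruction."""
    x = np.ones(A.shape[1])
    sens = A.sum(0)                               # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)      # data/model ratio
        # quadratic prior gradient: 2*(x - average of neighbors),
        # with zero padding at the edges
        grad = 2 * (x - np.convolve(x, [0.5, 0.0, 0.5], mode='same'))
        x = x * (A.T @ ratio) / np.maximum(sens + beta * grad, 1e-12)
    return x
```

With `beta = 0` this reduces to plain MLEM; increasing `beta` trades noise for the over-smoothing the abstract describes.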
Automatic Generation of CFD-Ready Surface Triangulations from CAD Geometry
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Delanaye, M.; Haimes, R.; Nixon, David (Technical Monitor)
1998-01-01
This paper presents an approach for the generation of closed manifold surface triangulations from CAD geometry. CAD parts and assemblies are used in their native format, without translation, and a part's native geometry engine is accessed through a modeler-independent application programming interface (API). In seeking a robust and fully automated procedure, the algorithm is based on a new physical space manifold triangulation technique which was developed to avoid robustness issues associated with poorly conditioned mappings. In addition, this approach avoids the usual ambiguities associated with floating-point predicate evaluation on constructed coordinate geometry in a mapped space. The technique is incremental, so that each new site improves the triangulation by some well defined quality measure. Sites are inserted using a variety of priority queues to ensure that new insertions will address the worst triangles first. As a result of this strategy, the algorithm will return its 'best' mesh for a given (prespecified) number of sites. Alternatively, the algorithm may be allowed to terminate naturally after achieving a prespecified measure of mesh quality. The resulting triangulations are 'CFD-ready' in that: (1) Edges match the underlying part model to within a specified tolerance. (2) Triangles on disjoint surfaces in close proximity have matching length-scales. (3) The algorithm produces a triangulation such that no angle is less than a given angle bound, alpha, or greater than Pi - 2*alpha. This result also sets bounds on the maximum vertex degree, triangle aspect ratio, and maximum stretching rate for the triangulation. In addition to the output triangulations for a variety of CAD parts, the discussion presents related theoretical results which assert the existence of such an angle bound, and demonstrate that maximum bounds of between 25 deg and 30 deg may be achieved in practice.
Mars Mission Optimization Based on Collocation of Resources
NASA Technical Reports Server (NTRS)
Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.
2003-01-01
This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.
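Collocation of thresholded feature maps within a mission radius, the core overlay operation described above, can be sketched as a boolean raster operation; the thresholds, radius, and grids below are hypothetical:

```python
import numpy as np

def collocate(map_a, map_b, thresh_a, thresh_b, radius):
    """Flag grid cells where feature A exceeds its threshold and feature B
    exceeds its own threshold anywhere within `radius` cells: a minimal
    sketch of resource-collocation mapping."""
    hit_a = map_a >= thresh_a
    yy, xx = np.indices(map_a.shape)
    near_b = np.zeros(map_a.shape, bool)
    for y, x in zip(*np.nonzero(map_b >= thresh_b)):   # dilate B hits by radius
        near_b |= (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
    return hit_a & near_b
```

Shrinking `radius` corresponds to a shorter mission range: collocated sites drop out once the two features are farther apart than the rover or crew can travel.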
U.S. stock market interaction network as learned by the Boltzmann machine
Borysov, Stanislav S.; Roudi, Yasser; Balatsky, Alexander V.
2015-12-07
Here, we study the historical dynamics of the joint equilibrium distribution of stock returns in the U.S. stock market using a Boltzmann distribution model parametrized by external fields and pairwise couplings. Within the Boltzmann learning framework for statistical inference, we analyze the historical behavior of the parameters inferred using exact and approximate learning algorithms. Since the model and inference methods require the use of binary variables, the effect of this mapping of continuous returns to the discrete domain is studied. The presented results show that binarization preserves the correlation structure of the market. Properties of the distributions of external fields and couplings, as well as the market interaction network and industry sector clustering structure, are studied for different historical dates and moving window sizes. We demonstrate that the observed positive heavy tail in the distribution of couplings is related to the sparse clustering structure of the market. We also show that discrepancies between the model's parameters might be used as a precursor of financial instabilities.
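The binarization and approximate-learning steps can be sketched with the naive mean-field inversion, a standard approximation for inferring Boltzmann-machine fields and couplings from binary data; the synthetic correlated returns below stand in for market data:

```python
import numpy as np

def nmf_couplings(returns):
    """Binarize returns to +/-1 spins and infer Boltzmann-machine
    parameters with the naive mean-field approximation, in which the
    coupling matrix is minus the inverse connected-correlation matrix."""
    s = np.where(returns >= 0, 1.0, -1.0)       # map continuous returns to spins
    m = s.mean(0)                               # magnetizations
    C = np.cov(s, rowvar=False)                 # connected correlation matrix
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)
    h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m   # nMF external fields
    return h, J
```

As the abstract notes, taking signs preserves the sign and rough strength of correlations, so strongly comoving stocks come out with large positive couplings while independent ones stay near zero.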
Arnold, David T; Rowen, Donna; Versteegh, Matthijs M; Morley, Anna; Hooper, Clare E; Maskell, Nicholas A
2015-01-23
In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be used to estimate EQ-5D values via existing mapping algorithms. Several mapping algorithms exist for this transformation; however, algorithms tend to lose accuracy for patients in poor health states. The aim of this study was to test all existing mapping algorithms of the QLQ-C30 onto the EQ-5D in a dataset of patients with malignant pleural mesothelioma, an invariably fatal malignancy for which no previous mapping estimation has been published. Health-related quality of life (HRQoL) data in which both the EQ-5D and QLQ-C30 were used simultaneously were obtained from the UK-based prospective observational SWAMP (South West Area Mesothelioma and Pemetrexed) trial. In the original trial, 73 patients with pleural mesothelioma were offered palliative chemotherapy and their HRQoL was assessed across five time points. These data were used to test the nine available mapping algorithms found in the literature, comparing predicted against observed EQ-5D values. The ability of the algorithms to predict the mean, minimise error, and detect clinically significant differences was assessed. The dataset had a total of 250 observations across the five time points. The linear regression mapping algorithms tested generally performed poorly, over-estimating predicted compared to observed EQ-5D values, especially when the observed EQ-5D was below 0.5. The best performing algorithm used a response mapping method and predicted the mean EQ-5D accurately, with an average root mean squared error of 0.17 (standard deviation 0.22). This algorithm reliably discriminated between clinically distinct subgroups seen in the primary dataset. This study tested mapping algorithms in a population with poor health states, where they have previously been shown to perform poorly. Further research into EQ-5D estimation should be directed at response mapping methods, given their superior performance in this study.
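The evaluation criterion used above, overall error versus error restricted to poor health states, can be sketched directly; the utility values below are hypothetical:

```python
import numpy as np

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def evaluate_mapping(pred, obs, cutoff=0.5):
    """Report overall RMSE and RMSE restricted to poor health states
    (observed utility below `cutoff`), where QLQ-C30 -> EQ-5D mapping
    algorithms are known to lose accuracy. Illustrative only."""
    poor = obs < cutoff
    return rmse(pred, obs), rmse(pred[poor], obs[poor])
```

A mapping that over-estimates low utilities, the failure mode reported here, shows up as a poor-health RMSE well above the overall RMSE.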
Calibrating a Rainfall-Runoff and Routing Model for the Continental United States
NASA Astrophysics Data System (ADS)
Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.
2014-12-01
Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop countrywide inundation maps for different return periods, a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff are then input to a two-dimensional inundation model, which produces the flood maps. In order to get realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach which includes additional snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach. Both models were calibrated using the multiobjective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period from 1979-2013. Additional gauges having no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and highlights the difficulty of simulating areas with sinks, such as karstic areas, or dry areas. Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24 (1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method), J. Hydr. Research, 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on evolutionary computation, 6(2), 182-197.
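A Muskingum routing step of the kind the routing model builds on can be sketched as follows; K and X are taken as given here, whereas the Muskingum-Cunge variant derives them from channel properties:

```python
def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph through a river reach with the Muskingum
    method. K is the reach travel time, X the storage weighting factor.
    Returns the outflow hydrograph, starting from steady state."""
    denom = 2 * K * (1 - X) + dt
    c1 = (dt - 2 * K * X) / denom
    c2 = (dt + 2 * K * X) / denom
    c3 = (2 * K * (1 - X) - dt) / denom      # note c1 + c2 + c3 == 1
    out = [inflow[0]]
    for t in range(1, len(inflow)):
        out.append(c1 * inflow[t] + c2 * inflow[t - 1] + c3 * out[-1])
    return out
```

Because the coefficients sum to one, steady inflow passes through unchanged, while a flood pulse is attenuated and lagged, which is the behavior the calibration tunes against observed gauges.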
A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence
NASA Astrophysics Data System (ADS)
Liu, Hui; Jin, Cong
2017-03-01
In this paper, a novel image encryption algorithm based on quantum chaos is proposed. The keystreams are generated using the two-dimensional logistic map with given initial conditions and parameters. A general Arnold scrambling algorithm with keys is then exploited to permute the pixels of the color components. In the diffusion process, a novel encryption algorithm, the folding algorithm, is proposed to modify the values of the diffused pixels. In order to achieve high randomness and complexity, the two-dimensional logistic map and the quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm has a high level of security.
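The keystream idea can be illustrated with a plain 1D logistic map driving an XOR cipher. This is a toy sketch of the chaotic-keystream concept only, not the paper's coupled 2D logistic / quantum chaotic scheme, and a bare logistic XOR stream is not secure in practice:

```python
import numpy as np

def logistic_keystream(n, x0, r=3.99):
    """Generate n pseudo-random bytes by iterating the logistic map
    x -> r*x*(1-x) and quantizing each state to 8 bits."""
    x = x0
    out = np.empty(n, np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_encrypt(pixels, key):
    """XOR a uint8 image with a logistic-map keystream; the map's
    initial condition `key` acts as the secret key. Self-inverse."""
    ks = logistic_keystream(pixels.size, x0=key)
    return pixels ^ ks.reshape(pixels.shape)
```

Sensitivity to the initial condition means a key differing in the last decimal place produces an entirely different keystream, which is the property chaotic ciphers rely on.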
Chaotic attractors of relaxation oscillators
NASA Astrophysics Data System (ADS)
Guckenheimer, John; Wechselberger, Martin; Young, Lai-Sang
2006-03-01
We develop a general technique for proving the existence of chaotic attractors for three-dimensional vector fields with two time scales. Our results connect two important areas of dynamical systems: the theory of chaotic attractors for discrete two-dimensional Henon-like maps and geometric singular perturbation theory. Two-dimensional Henon-like maps are diffeomorphisms that limit on non-invertible one-dimensional maps. Wang and Young formulated hypotheses that suffice to prove the existence of chaotic attractors in these families. Three-dimensional singularly perturbed vector fields have return maps that are also two-dimensional diffeomorphisms limiting on one-dimensional maps. We describe a generic mechanism that produces folds in these return maps and demonstrate that the Wang-Young hypotheses are satisfied. Our analysis requires a careful study of the convergence of the return maps to their singular limits in the Ck topology for k >= 3. The theoretical results are illustrated with a numerical study of a variant of the forced van der Pol oscillator.
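A stroboscopic return map of the forced van der Pol oscillator, sampled once per forcing period, can be computed numerically; the parameter values below are illustrative, not those of the paper's study:

```python
import numpy as np

def forced_vdp_section(mu=5.0, a=1.0, omega=2 * np.pi, n_cycles=100, steps=1000):
    """Integrate x'' - mu*(1 - x^2)*x' + x = a*sin(omega*t) with RK4 and
    record x once per forcing period: a stroboscopic return map of the
    forced van der Pol oscillator (illustrative parameters)."""
    def f(t, u):
        x, v = u
        return np.array([v, mu * (1 - x**2) * v - x + a * np.sin(omega * t)])

    T = 2 * np.pi / omega
    h = T / steps
    u, t = np.array([0.5, 0.0]), 0.0
    section = []
    for _ in range(n_cycles):
        for _ in range(steps):
            k1 = f(t, u); k2 = f(t + h/2, u + h/2 * k1)
            k3 = f(t + h/2, u + h/2 * k2); k4 = f(t + h, u + h * k3)
            u = u + h/6 * (k1 + 2*k2 + 2*k3 + k4); t += h
        section.append(u[0])
    return np.array(section)
```

Plotting successive section values against each other gives the two-dimensional return map whose folds, in the singular limit, the paper's analysis connects to Henon-like chaotic attractors.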
PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S; Gorti, Sarma B; Peter, William H
A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS™. Since powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms compute the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli consistent with the return-mapping algorithm was also developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a smaller variation than that of the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23% and 50% less than the maximum values, respectively. It was also found that the minimum density was noticeably affected by the sample height.
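The return-mapping step referenced here can be sketched for the simplest case, J2 plasticity with linear isotropic hardening; this is the textbook radial-return predictor-corrector update, not the pressure-dependent powder model of the paper:

```python
import numpy as np

def radial_return(eps_trial_dev, eps_p_dev, alpha, G, sigma_y, H):
    """One step of the radial-return (return-mapping) algorithm for J2
    plasticity with linear isotropic hardening. Works on deviatoric
    quantities in a small-strain setting; a sketch, not ABAQUS's UMAT."""
    s_trial = 2 * G * (eps_trial_dev - eps_p_dev)        # elastic predictor
    norm = np.linalg.norm(s_trial)
    f = norm - np.sqrt(2/3) * (sigma_y + H * alpha)      # trial yield function
    if f <= 0:
        return s_trial, eps_p_dev, alpha                 # elastic step
    dgamma = f / (2 * G + (2/3) * H)                     # plastic multiplier
    n = s_trial / norm                                   # radial return direction
    s = s_trial - 2 * G * dgamma * n                     # corrected stress
    return s, eps_p_dev + dgamma * n, alpha + np.sqrt(2/3) * dgamma
```

After a plastic step the updated stress sits exactly on the hardened yield surface, |s| = sqrt(2/3)·(σy + H·α), which is the consistency condition an implicit integrator enforces.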
Plasmid mapping computer program.
Nolan, G P; Maina, C V; Szalay, A A
1984-01-01
Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA that has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adjuncts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105
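The forward computation such programs must invert, from candidate cut positions to double-digest fragment sizes on a circular map, is simple to state and serves as the consistency check each candidate solution must pass; the positions below are invented:

```python
def double_digest(sites_a, sites_b, length):
    """Given cut positions (in bp) of two restriction enzymes on a
    circular plasmid of the given length, return the sorted double-digest
    fragment sizes. A candidate circular map is valid only if this output
    matches the observed double-digest gel."""
    cuts = sorted(set(sites_a) | set(sites_b))
    frags = [b - a for a, b in zip(cuts, cuts[1:])]
    frags.append(length - cuts[-1] + cuts[0])   # fragment spanning the origin
    return sorted(frags)
```

The mapping problem runs this check over permutations of the single-digest fragments, which is why pruning rules (the Rule-Oriented algorithm) matter for keeping the search fast.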
Reliability of unstable periodic orbit based control strategies in biological systems.
Mishra, Nagender; Hasse, Maria; Biswal, B; Singh, Harinder P
2015-04-01
Presence of recurrent and statistically significant unstable periodic orbits (UPOs) in time series obtained from biological systems is now routinely used as evidence for low dimensional chaos. Extracting accurate dynamical information from the detected UPO trajectories is vital for successful control strategies that either aim to stabilize the system near the fixed point or steer the system away from the periodic orbits. A hybrid UPO detection method from return maps that combines topological recurrence criterion, matrix fit algorithm, and stringent criterion for fixed point location gives accurate and statistically significant UPOs even in the presence of significant noise. Geometry of the return map, frequency of UPOs visiting the same trajectory, length of the data set, strength of the noise, and degree of nonstationarity affect the efficacy of the proposed method. Results suggest that establishing determinism from unambiguous UPO detection is often possible in short data sets with significant noise, but derived dynamical properties are rarely accurate and adequate for controlling the dynamics around these UPOs. A repeat chaos control experiment on epileptic hippocampal slices through more stringent control strategy and adaptive UPO tracking is reinterpreted in this context through simulation of similar control experiments on an analogous but stochastic computer model of epileptic brain slices. Reproduction of equivalent results suggests that far more stringent criteria are needed for linking apparent success of control in such experiments with possible determinism in the underlying dynamics.
Pei, Yan
2015-01-01
We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation into the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE), that replaces the fitness function with a real human in the CE algorithm framework. There is a paired-comparison-based mechanism behind the CE search scheme in nature. A simulation experiment is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.
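A minimal chaotic-evolution-style optimizer, with a logistic map supplying the search perturbations and greedy paired-comparison replacement, can be sketched as follows; this is our own simplified illustration of the CE idea, not the authors' framework:

```python
import numpy as np

def chaotic_evolution(fitness, dim, bounds, pop=20, gens=200, seed=1):
    """Minimize `fitness` over [lo, hi]^dim. Candidate perturbations are
    drawn from a logistic-map sequence instead of a random number
    generator; replacement is a greedy paired comparison, mirroring the
    pairwise mechanism behind the CE search scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))
    F = np.array([fitness(x) for x in X])
    z = 0.7
    for _ in range(gens):
        for i in range(pop):
            step = np.empty(dim)
            for d in range(dim):
                z = 3.99 * z * (1 - z)           # logistic map drives the search
                step[d] = (z - 0.5) * 0.2 * (hi - lo)
            cand = np.clip(X[i] + step, lo, hi)
            fc = fitness(cand)
            if fc < F[i]:                        # paired comparison, keep winner
                X[i], F[i] = cand, fc
    return X[np.argmin(F)], float(F.min())
```

Swapping the logistic map for the tent, Gaussian, or Hénon map changes the step distribution, which is exactly the factor the abstract identifies as driving optimization performance.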
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the second heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
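The priority-list flavor of the first heuristic can be sketched with highest-level-first list scheduling, assigning each ready module to the processor that finishes it earliest under a fixed communication cost; this is a simplified sketch, not the paper's weighted-bipartite-matching algorithm:

```python
def list_schedule(duration, deps, comm, p):
    """Greedy list scheduling of task modules with precedence onto p
    processors. Priority is the module's level (longest remaining path);
    a fixed cost `comm` is paid when a predecessor ran on another
    processor. deps[u] is the set of predecessors of u, with deps[u]
    containing only ids smaller than u (topological numbering)."""
    n = len(duration)
    level = [0.0] * n
    for u in reversed(range(n)):
        succ = [v for v in range(n) if u in deps[v]]
        level[u] = duration[u] + max((level[v] for v in succ), default=0.0)
    free = [0.0] * p                    # processor finishing times
    finish, where, done = {}, {}, set()
    while len(done) < n:
        ready = [u for u in range(n) if u not in done and deps[u] <= done]
        u = max(ready, key=lambda m: level[m])     # deepest level first
        best = None
        for q in range(p):
            start = free[q]
            for d in deps[u]:                      # wait for data to arrive
                start = max(start, finish[d] + (comm if where[d] != q else 0.0))
            if best is None or start + duration[u] < best[0]:
                best = (start + duration[u], q)
        end, q = best
        free[q] = end; finish[u] = end; where[u] = q; done.add(u)
    return max(free), where
```

With an expensive communication cost a dependency chain stays on one processor, while independent modules spread across processors when communication is free, illustrating the trade-off the objective function captures.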
2012-01-01
Background Structured association mapping is proving to be a powerful strategy to find genetic polymorphisms associated with disease. However, these algorithms are often distributed as command line implementations that require expertise and effort to customize and put into practice. Because of the difficulty required to use these cutting-edge techniques, geneticists often revert to simpler, less powerful methods. Results To make structured association mapping more accessible to geneticists, we have developed an automatic processing system called Auto-SAM. Auto-SAM enables geneticists to run structured association mapping algorithms automatically, using parallelization. Auto-SAM includes algorithms to discover gene-networks and find population structure. Auto-SAM can also run popular association mapping algorithms, in addition to five structured association mapping algorithms. Conclusions Auto-SAM is available through GenAMap, a front-end desktop visualization tool. GenAMap and Auto-SAM are implemented in JAVA; binaries for GenAMap can be downloaded from http://sailing.cs.cmu.edu/genamap. PMID:22471660
Intravenous lipid emulsion alters the hemodynamic response to epinephrine in a rat model.
Carreiro, Stephanie; Blum, Jared; Jay, Gregory; Hack, Jason B
2013-09-01
Intravenous lipid emulsion (ILE) is an adjunctive antidote used in selected critically ill poisoned patients. These patients may also require administration of advanced cardiac life support (ACLS) drugs. Limited data are available to describe interactions of ILE with standard ACLS drugs, specifically epinephrine. Twenty rats with intra-arterial and intravenous access were sedated with isoflurane and split into ILE or normal saline (NS) pretreatment groups. All received epinephrine 15 μg/kg intravenously (IV). Continuous mean arterial pressure (MAP) and heart rate (HR) were monitored until both indices returned to baseline. Standardized t tests were used to compare peak MAP, time to peak MAP, maximum change in HR, time to maximum change in HR, and time to return to baseline MAP/HR. There was a significant difference (p = 0.023) in time to peak MAP in the ILE group (54 s, 95 % CI 44-64) versus the NS group (40 s, 95 % CI 32-48) and a significant difference (p = 0.004) in time to return to baseline MAP in the ILE group (171 s, 95 % CI 148-194) versus the NS group (130 s, 95 % CI 113-147). There were no significant differences in the peak change in MAP, peak change in HR, time to minimum HR, or time to return to baseline HR between groups. ILE-pretreated rats had a significant difference in MAP response to epinephrine; ILE delayed the peak effect and prolonged the duration of effect of epinephrine on MAP, but did not alter the peak increase in MAP or the HR response.
Cloud and Aerosol Retrieval for the 2001 GLAS Satellite Lidar Mission
NASA Technical Reports Server (NTRS)
Hart, William D.; Palm, Stephen P.; Spinhirne, James D.
2000-01-01
The Geoscience Laser Altimeter System (GLAS) is scheduled for launch in July of 2001 aboard the Ice, Cloud and Land Elevation Satellite (ICESAT). In addition to being a precision altimeter for mapping the height of the Earth's ice sheets, GLAS will be an atmospheric lidar, sensitive enough to detect gaseous, aerosol, and cloud backscatter signals, at horizontal and vertical resolutions of 175 m and 75 m, respectively. GLAS will be the first lidar to produce temporally continuous atmospheric backscatter profiles with nearly global coverage (94-degree orbital inclination). With a projected operational lifetime of five years, GLAS will collect approximately six billion lidar return profiles. The large volume of data dictates that operational analysis algorithms, which need to keep pace with the data yield of the instrument, must be efficient. We therefore need to evaluate the ability of operational algorithms to detect atmospheric constituents that affect global climate, and to quantify, in a statistical manner, the accuracy and precision of GLAS cloud and aerosol observations. Our poster presentation will show the results of modeling studies that are designed to reveal the effectiveness and sensitivity of GLAS in detecting various atmospheric cloud and aerosol features. The studies consist of analyzing simulated lidar returns. Simulation cases are constructed either from idealized renditions of atmospheric cloud and aerosol layers or from data obtained by the NASA ER-2 Cloud Lidar System (CLS). The fabricated renditions permit quantitative evaluations of operational algorithms to retrieve cloud and aerosol parameters. The use of observational data permits the evaluation of performance for actual atmospheric conditions. The intended outcome of the presentation is that the climatology community will be able to use the results of these studies to evaluate and quantify the impact of GLAS data upon atmospheric modeling efforts.
A new image enhancement algorithm with applications to forestry stand mapping
NASA Technical Reports Server (NTRS)
Kan, E. P. F. (Principal Investigator); Lo, J. K.
1975-01-01
The author has identified the following significant results. Results show that the new algorithm produced cleaner classification maps in which holes of small predesignated sizes were eliminated and significant boundary information was preserved. These cleaner post-processed maps better resemble true-life timber stand maps and are thus more usable products than the maps before post-processing. Compared to an accepted neighbor-checking post-processing technique, the new algorithm is more appropriate for timber stand mapping.
Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing
NASA Astrophysics Data System (ADS)
Williams, McKay D.
Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. 
Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
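The MAE and MA-MAE metrics described above are straightforward to compute. The sketch below assumes MA-MAE subtracts a known mean error of the reference data before scoring, which is one plausible reading of the "mean adjusted" correction; the thesis's exact adjustment may differ.

```python
def mae(est, ref):
    """Mean absolute error between unmixing abundances and reference
    abundances, per class and pixel."""
    return sum(abs(e - r) for e, r in zip(est, ref)) / len(est)

def ma_mae(est, ref, ref_bias):
    """Reference-data mean adjusted MAE sketch: shift the reference by
    its own known mean error before scoring, so the unmixing result is
    not penalized for bias in the reference data itself (assumed form
    of the adjustment, not necessarily the author's)."""
    return mae(est, [r - ref_bias for r in ref])
```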
Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms
NASA Astrophysics Data System (ADS)
Samanta, A.; Todd, L. A.
A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms.
A comprehensive evaluation of algorithms for the environmental application of tomography requires, before field implementation, a battery of test concentration data that models reality and tests the limits of the algorithms.
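As an illustration of the best-performing family above, a minimal MART update for path-integral measurements might look like the following. The damping of the multiplicative correction by relative path length is one common MART variant, not necessarily the study's exact formulation.

```python
def mart(weights, measurements, n_cells, n_iter=50):
    """Multiplicative Algebraic Reconstruction Technique (MART) sketch.

    weights[i][j] is the path length of ray i through grid cell j;
    measurements[i] is the integrated concentration along ray i.
    Returns estimated per-cell concentrations."""
    conc = [1.0] * n_cells  # strictly positive start, as MART requires
    for _ in range(n_iter):
        for w_row, m in zip(weights, measurements):
            pred = sum(w * c for w, c in zip(w_row, conc))
            wmax = max(w_row)
            if pred <= 0 or m <= 0 or wmax == 0:
                continue
            ratio = m / pred
            for j, w in enumerate(w_row):
                if w > 0:
                    # damped multiplicative correction, exponent in (0, 1]
                    conc[j] *= ratio ** (w / wmax)
    return conc
```

On a two-cell grid with one overlapping and one single-cell ray, the iteration converges to the concentrations that reproduce both path integrals.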
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian; Scherzinger, William
2017-01-19
Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
Numerical Conformal Mapping Using Cross-Ratios and Delaunay Triangulation
NASA Technical Reports Server (NTRS)
Driscoll, Tobin A.; Vavasis, Stephen A.
1996-01-01
We propose a new algorithm for computing the Riemann mapping of the unit disk to a polygon, also known as the Schwarz-Christoffel transformation. The new algorithm, CRDT, is based on cross-ratios of the prevertices, and also on cross-ratios of quadrilaterals in a Delaunay triangulation of the polygon. The CRDT algorithm produces an accurate representation of the Riemann mapping even in the presence of arbitrarily long, thin regions in the polygon, unlike any previous conformal mapping algorithm. We believe that CRDT can never fail to converge to the correct Riemann mapping, but the correctness and convergence proof depend on conjectures that we have so far not been able to prove. We demonstrate convergence with computational experiments. The Riemann mapping has applications to problems in two-dimensional potential theory and to finite-difference mesh generation. We use CRDT to produce mappings and solve boundary value problems on long, thin regions that no other algorithm can handle.
Mapping snow depth return levels: smooth spatial modeling versus station interpolation
NASA Astrophysics Data System (ADS)
Blanchet, J.; Lehning, M.
2010-12-01
For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first one is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second one is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitudinal dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
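Once a GEV has been fitted at (or smoothed over) a location, mapping a T-year return level is a closed-form evaluation of the fitted distribution's quantile function. A sketch using the standard GEV return-level formula:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) distribution of
    annual maxima: the level exceeded on average once every T years.
    Uses the Gumbel limit when the shape parameter xi is (near) zero."""
    if T <= 1:
        raise ValueError("return period must exceed one year")
    y = -math.log(1.0 - 1.0 / T)     # reduced variate
    if abs(xi) < 1e-9:               # Gumbel case, xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)
```

Evaluating this at every grid point of the spatially smooth GEV parameter surfaces yields a return-level map directly, with no further interpolation.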
Tele-Operated Lunar Rover Navigation Using Lidar
NASA Technical Reports Server (NTRS)
Pedersen, Liam; Allan, Mark B.; Utz, Hans, Heinrich; Deans, Matthew C.; Bouyssounouse, Xavier; Choi, Yoonhyuk; Fluckiger, Lorenzo; Lee, Susan Y.; To, Vinh; Loh, Jonathan;
2012-01-01
Near real-time tele-operated driving on the lunar surface remains constrained by bandwidth and signal latency despite the Moon s relative proximity. As part of our work within NASA s Human-Robotic Systems Project (HRS), we have developed a stand-alone modular LIDAR based safeguarded tele-operation system of hardware, middleware, navigation software and user interface. The system has been installed and tested on two distinct NASA rovers-JSC s Centaur2 lunar rover prototype and ARC s KRex research rover- and tested over several kilometers of tele-operated driving at average sustained speeds of 0.15 - 0.25 m/s around rocks, slopes and simulated lunar craters using a deliberately constrained telemetry link. The navigation system builds onboard terrain and hazard maps, returning highest priority sections to the off-board operator as permitted by bandwidth availability. It also analyzes hazard maps onboard and can stop the vehicle prior to contacting hazards. It is robust to severe pose errors and uses a novel scan alignment algorithm to compensate for attitude and elevation errors.
New segmentation-based tone mapping algorithm for high dynamic range image
NASA Astrophysics Data System (ADS)
Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong
2017-07-01
Traditional tone mapping algorithms for the display of high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. First, the over-exposed region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray levels of the histogram, the under-exposed and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to obtain the final result. Experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.
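The region-wise idea can be illustrated with a toy luminance mapper: split pixels into exposure groups and apply a different curve to each. The thresholds and curves below are placeholders for the paper's LBP- and histogram-based region selection, chosen so the overall mapping stays continuous and monotone.

```python
def tone_map(lum, low=0.05, high=0.85):
    """Region-wise tone mapping sketch. Luminances are normalized to
    [0, 1] and split into under-, normal- and over-exposed groups by
    two thresholds; each group gets its own curve (illustrative values,
    not the paper's region rules)."""
    out = []
    for v in lum:
        if v < low:                     # under-exposed: boost shadows
            out.append(low * (v / low) ** 0.5)
        elif v > high:                  # over-exposed: compress highlights
            out.append(high + 0.5 * (v - high))
        else:                           # normal exposure: unchanged
            out.append(v)
    return out
```

The curves meet at the thresholds (both branches equal `low` at v = low and `high` at v = high), so no banding is introduced at region boundaries.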
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-11-13
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
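The correlation-based matching at the core of fingerprinting can be sketched as plain normalized cross correlation between the on-line RSS vector and each reference point's stored fingerprint, without the FNCC speedups or the Kalman/map filter:

```python
import math

def ncc(a, b):
    """Normalized cross correlation between two RSS vectors."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def locate(online_rss, radio_map):
    """Return the reference point whose stored fingerprint correlates
    best with the on-line RSS sample: a nearest-neighbor sketch of the
    fingerprinting step, not the paper's full FNCC/KMF pipeline."""
    return max(radio_map, key=lambda rp: ncc(online_rss, radio_map[rp]))
```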
Improving depth maps of plants by using a set of five cameras
NASA Astrophysics Data System (ADS)
Kaczmarek, Adam L.
2015-03-01
Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on the use of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps, called multiple similar areas (MSA), is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and the stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). The presented algorithms were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
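The SSD matching cost at the heart of the SSSD measure can be sketched for a single camera pair along one scanline; SSSD then sums this cost over all four side cameras before picking the disparity. A one-dimensional sketch:

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    return sum((a - b) ** 2 for a, b in zip(patch_a, patch_b))

def best_disparity(ref_row, side_row, x, window, max_disp):
    """Pick the disparity minimizing SSD along one scanline pair: the
    matching-cost core that SSSD would sum over the four side cameras."""
    half = window // 2
    ref_patch = ref_row[x - half: x + half + 1]
    best, best_cost = 0, float("inf")
    for d in range(min(max_disp, x - half) + 1):
        cand = side_row[x - d - half: x - d + half + 1]
        cost = ssd(ref_patch, cand)
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

With self-similar textures such as leaves, several disparities produce near-identical SSD costs, which is exactly the ambiguity that extra cameras help resolve.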
Automatic Boosted Flood Mapping from Satellite Data
NASA Technical Reports Server (NTRS)
Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence
2016-01-01
Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on Adaboost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The Adaboost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms that do require human input. We evaluate Adaboost, along with numerous previously proposed flood mapping algorithms, on multiple MODIS flood images and on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
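A minimal AdaBoost over single-feature threshold stumps shows how multiple thresholds combine into one stronger classifier. Real MODIS flood mapping would use several spectral features and many candidate thresholds rather than the single toy feature assumed here.

```python
import math

def train_adaboost(values, labels, thresholds, rounds=5):
    """AdaBoost over one-feature threshold stumps. Each weak learner
    predicts +1 ('water') when the pixel value is below a threshold
    (or above, with polarity -1). labels are +1/-1."""
    n = len(values)
    w = [1.0 / n] * n
    ensemble = []                       # (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in thresholds:
            for pol in (1, -1):
                preds = [pol if v < t else -pol for v in values]
                err = sum(wi for wi, p, y in zip(w, preds, labels) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # re-weight: emphasize misclassified samples
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, labels, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def classify(ensemble, v):
    score = sum(a * (pol if v < t else -pol) for a, t, pol in ensemble)
    return 1 if score > 0 else -1
```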
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
A novel algorithm for fully automated mapping of geospatial ontologies
NASA Astrophysics Data System (ADS)
Chaabane, Sana; Jaziri, Wassim
2018-01-01
Geospatial information is collected from different sources, so spatial ontologies built for the same geographic domain may rest on different, heterogeneous conceptualizations. Ontology integration helps create a common repository of geospatial ontologies and removes the heterogeneities between existing ontologies. Ontology mapping, a process used in ontology integration, consists in finding correspondences between the source ontologies. This paper deals with the mapping process for geospatial ontologies, which consists in applying an automated algorithm to find correspondences between concepts according to the definitions of matching relationships. The proposed algorithm, called the "geographic ontologies mapping algorithm," defines three types of mapping: semantic, topological and spatial.
Martian resource locations: Identification and optimization
NASA Astrophysics Data System (ADS)
Chamitoff, Gregory; James, George; Barker, Donald; Dershowitz, Adam
2005-04-01
The identification and utilization of in situ Martian natural resources is the key to enable cost-effective long-duration missions and permanent human settlements on Mars. This paper presents a powerful software tool for analyzing Martian data from all sources, and for optimizing mission site selection based on resource collocation. This program, called Planetary Resource Optimization and Mapping Tool (PROMT), provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in situ resource utilization. Preliminary optimization results are shown for a number of mission scenarios.
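Collocating resources over co-registered raster layers reduces to a per-cell conjunction of thresholds. The sketch below is illustrative of that map-combination step, not PROMT's actual interface; layer names and thresholds are placeholders.

```python
def collocate(layers, criteria):
    """Combine co-registered resource maps into a boolean suitability
    map: a cell qualifies when every named layer meets its threshold.
    layers maps a name to a 2-D grid; criteria maps the same names to
    minimum acceptable values."""
    grid = next(iter(layers.values()))
    rows, cols = len(grid), len(grid[0])
    return [[all(layers[name][r][c] >= thr
                 for name, thr in criteria.items())
             for c in range(cols)] for r in range(rows)]
```

A landing-site search would then score contiguous regions of True cells against mission radius and science targets.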
Cloud computing-based TagSNP selection algorithm for human genome data.
Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2015-01-05
Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used.
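A greedy sketch of haplotype-block finding conveys the criterion of "a few common haplotypes covering most chromosomes"; the paper's combinatorial algorithm and its MapReduce parallelization are more sophisticated, and the coverage rule below is an assumption for illustration.

```python
def haplotype_blocks(haplotypes, max_common=3, coverage=0.9):
    """Greedy left-to-right partition of SNP sites into blocks: keep
    extending the current block while its `max_common` most frequent
    haplotypes still cover at least a `coverage` fraction of all
    chromosomes. haplotypes is a list of equal-length allele strings;
    blocks are returned as half-open (start, end) site ranges."""
    n_sites = len(haplotypes[0])
    blocks, start = [], 0
    for end in range(1, n_sites + 1):
        segs = [h[start:end] for h in haplotypes]
        counts = sorted((segs.count(s) for s in set(segs)), reverse=True)
        if sum(counts[:max_common]) < coverage * len(segs):
            blocks.append((start, end - 1))   # last site broke the rule
            start = end - 1
    blocks.append((start, n_sites))
    return blocks
```

Because each block is scored independently of the others, the per-region work parallelizes naturally, which is the property the MapReduce extension exploits.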
Terrain profiling from Seasat altimetry
NASA Technical Reports Server (NTRS)
Brooks, R. L.
1981-01-01
To determine their applicability for terrain profiling, Seasat altimeter measurements were analyzed for the following geographic areas: (1) Andean salars of southern Bolivia; (2) Alaska; (3) south-central Arizona; (4) Imperial Valley of California; (5) Yuma Valley of Arizona; and (6) Great Salt Lake Desert. Analysis of the data over all of these geographic areas shows that the satellite altimeter servo did not respond quickly enough to changing terrain features. However, it is demonstrated that retracking of the archived surface-return waveforms yields surface elevations over smooth terrain accurate to ±1 m when correlated with large-scale maps. The retracking algorithm used and its verification over the salars of southern Bolivia are described. Results are presented for each of the six geographic areas.
Manifold absolute pressure estimation using neural network with hybrid training algorithm
Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli
2017-01-01
In a modern small gasoline engine fuel injection system, the engine load is estimated from the measurement of the manifold absolute pressure (MAP) sensor located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM), Bayesian Regularization (BR) and Particle Swarm Optimization (PSO) algorithms. Based on the results of 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value. PMID:29190779
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers was used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general: it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which in turn improves the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. It has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
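The frequency-domain matching step can be sketched as follows. This is a generic illustration of FFT template matching, not the flight code: it uses plain zero-mean cross-correlation, whereas the reported algorithm uses normalized correlation.

```python
import numpy as np

def fft_match(image, template):
    """Locate `template` in `image` by cross-correlation computed in
    the frequency domain. Returns the (row, col) of the best match's
    top-left corner. Normalization by local image energy (as in true
    normalized correlation) is omitted for brevity."""
    t = template - template.mean()
    i = image - image.mean()
    # zero-pad the template to image size and correlate via FFT
    corr = np.real(np.fft.ifft2(np.fft.fft2(i) *
                                np.conj(np.fft.fft2(t, s=i.shape))))
    row, col = np.unravel_index(np.argmax(corr), corr.shape)
    return row, col

# usage: embed a template at a known offset and recover the offset
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
tpl = img[10:20, 30:40].copy()
print(fft_match(img, tpl))
```

The spatial-domain refinement stage would then run the same correlation directly over small windows around each recovered landmark.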
Optimal mapping of neural-network learning on message-passing multicomputers
NASA Technical Reports Server (NTRS)
Chu, Lon-Chan; Wah, Benjamin W.
1992-01-01
This study seeks to minimize learning-algorithm completion time by optimally mapping the learning process of multilayer feed-forward artificial neural networks (ANNs) onto message-passing multicomputers. A novel approximation algorithm for mappings of this kind is derived from the observation that a parallel ANN algorithm's computation time dominates its communication time. Attention is given to both static and dynamic mapping schemes for systems with static and dynamic background workloads, as well as to experimental results obtained for simulated mappings on multicomputers with dynamic background workloads.
Combinatorial Algorithms for Portfolio Optimization Problems - Case of Risk Moderate Investor
NASA Astrophysics Data System (ADS)
Juarna, A.
2017-03-01
Portfolio optimization is the problem of finding the optimal combination of n stocks from N ≥ n available stocks that gives maximal aggregate return and minimal aggregate risk. In this paper N = 43 stocks are taken from the IDX (Indonesia Stock Exchange) group of the 45 most-traded stocks, known as the LQ45, with p = 24 monthly returns for each stock spanning 2013-2014. The problem is a combinatorial one, and the algorithm is constructed based on two considerations: a risk-moderate type of investor and the maximum allowed correlation coefficient between every two eligible stocks. The main output of the algorithm is a set of curves of three portfolio attributes, i.e. the size, the ratio of return to risk, and the percentage of negative correlation coefficients between chosen stocks, as functions of the maximum allowed correlation coefficient between each two stocks. The curves show that the portfolio contains three stocks with a return-to-risk ratio of 14.57 if the maximum allowed correlation coefficient between every two eligible stocks is negative, and contains 19 stocks with a maximum allowed correlation coefficient of 0.17 to reach the maximum return-to-risk ratio of 25.48.
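A minimal sketch of the selection idea, under the assumption (not stated in the abstract) that candidates are screened greedily in order of mean return, adding a stock only when its correlation with every stock already chosen stays below the allowed maximum:

```python
import numpy as np

def select_stocks(returns, rho_max):
    """Greedy combinatorial selection: add a stock only if its
    correlation with every stock already chosen is <= rho_max.
    returns: (p periods x N stocks) matrix of monthly returns."""
    corr = np.corrcoef(returns, rowvar=False)
    chosen = []
    # consider stocks in order of decreasing mean return
    for j in np.argsort(-returns.mean(axis=0)):
        if all(corr[j, k] <= rho_max for k in chosen):
            chosen.append(int(j))
    return chosen

def return_risk_ratio(returns, chosen):
    """Aggregate return divided by aggregate risk for an
    equal-weight portfolio over the chosen stocks."""
    port = returns[:, chosen].mean(axis=1)
    return port.mean() / port.std()
```

Sweeping `rho_max` and recording portfolio size and return-to-risk ratio at each value reproduces the kind of attribute curves the paper reports.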
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preciat Gonzalez, German A; El Assal, Lemmer R P; Noronha, Alberto; Thiele, Ines; Haraldsdóttir, Hulda S; Fleming, Ronan M T
2017-06-14
The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. The algorithms were also evaluated on their accessibility and advanced features, such as the ability to identify equivalent atoms and to map hydrogen atoms. In practice, we found that software accessibility and advanced features, not prediction accuracy alone, were fundamental to the selection of an atom mapping algorithm.
How similar are forest disturbance maps derived from different Landsat time series algorithms?
Warren B. Cohen; Sean P. Healey; Zhiqiang Yang; Stephen V. Stehman; C. Kenneth Brewer; Evan B. Brooks; Noel Gorelick; Chengqaun Huang; M. Joseph Hughes; Robert E. Kennedy; Thomas R. Loveland; Gretchen G. Moisen; Todd A. Schroeder; James E. Vogelmann; Curtis E. Woodcock; Limin Yang; Zhe Zhu
2017-01-01
Disturbance is a critical ecological process in forested systems, and disturbance maps are important for understanding forest dynamics. Landsat data are a key remote sensing dataset for monitoring forest disturbance, and there has recently been major growth in the development of disturbance mapping algorithms. Many of these algorithms take advantage of the high temporal...
Conditional Random Field-Based Offline Map Matching for Indoor Environments
Bataineh, Safaa; Bahillo, Alfonso; Díez, Luis Enrique; Onieva, Enrique; Bataineh, Ikram
2016-01-01
In this paper, we present an offline map matching technique designed for indoor localization systems based on conditional random fields (CRF). The proposed algorithm can refine the results of existing indoor localization systems and match them with the map, using loose coupling between the existing localization system and the proposed map matching technique. The purpose of this research is to investigate the efficiency of using the CRF technique in offline map matching problems for different scenarios and parameters. The algorithm was applied to several real and simulated trajectories of different lengths. The results were then refined and matched with the map using the CRF algorithm. PMID:27537892
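The abstract does not give the CRF's potentials, but the decoding step of any linear-chain model of this kind is standard dynamic programming. Below is a hypothetical sketch with hand-set emission and transition costs (states are map cells); a trained CRF would replace these with learned feature weights:

```python
def viterbi_match(observations, cells, sigma=1.0, jump_penalty=5.0):
    """Offline map matching by minimum-cost path decoding (Viterbi).

    States are map cells (x, y); the emission cost is the squared
    distance from the noisy position fix, and the transition cost
    penalizes moves between non-adjacent cells. Both costs are
    hand-set stand-ins for a trained CRF's learned potentials."""
    def emit(obs, cell):
        return ((obs[0] - cell[0])**2 + (obs[1] - cell[1])**2) / (2 * sigma**2)

    def trans(a, b):
        return 0.0 if abs(a[0] - b[0]) + abs(a[1] - b[1]) <= 1 else jump_penalty

    cost = [emit(observations[0], c) for c in cells]
    back = []
    for obs in observations[1:]:
        new_cost, ptr = [], []
        for cj in cells:
            i_best = min(range(len(cells)),
                         key=lambda i: cost[i] + trans(cells[i], cj))
            new_cost.append(cost[i_best] + trans(cells[i_best], cj) + emit(obs, cj))
            ptr.append(i_best)
        cost = new_cost
        back.append(ptr)
    # backtrack from the cheapest final state
    path = [min(range(len(cells)), key=lambda i: cost[i])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return [cells[i] for i in reversed(path)]
```

Because the whole trajectory is decoded at once, later fixes can correct earlier ambiguous ones, which is what makes the offline formulation attractive.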
Range image registration based on hash map and moth-flame optimization
NASA Astrophysics Data System (ADS)
Zou, Li; Ge, Baozhen; Chen, Lei
2018-03-01
Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.
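The hash-map device can be illustrated independently of the optimizer. A sketch of caching fitness evaluations keyed on the quantized parameter vector, so the EA never pays twice for the same region of the search space (the resolution value is an assumption, not a figure from the paper):

```python
def make_cached_objective(objective, resolution=1e-3):
    """Wrap a costly registration objective with a hash map keyed on
    the quantized parameter vector. Re-visiting an already-explored
    region (over-exploitation) then costs a dictionary lookup instead
    of a full fitness evaluation."""
    cache = {}
    stats = {"calls": 0, "hits": 0}

    def cached(params):
        key = tuple(round(p / resolution) for p in params)
        stats["calls"] += 1
        if key in cache:
            stats["hits"] += 1
        else:
            cache[key] = objective(params)
        return cache[key]

    return cached, stats
```

A restart mechanism, as in the paper, could additionally use the hit rate as a trigger: a high fraction of cache hits signals that the population has collapsed onto already-explored cells.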
NASA Astrophysics Data System (ADS)
Caceres, Jhon
Three-dimensional (3D) models of urban infrastructure comprise critical data for planners working on problems in wireless communications, environmental monitoring, civil engineering, and urban planning, among other tasks. Photogrammetric methods have been the most common approach to date to extract building models. However, Airborne Laser Swath Mapping (ALSM) observations offer a competitive alternative because they overcome some of the ambiguities that arise when trying to extract 3D information from 2D images. Regardless of the source data, the building extraction process requires segmentation and classification of the data and building identification. In this work, approaches for classifying ALSM data, separating building and tree points, and delineating ALSM footprints from the classified data are described. Digital aerial photographs are used in some cases to verify results, but the objective of this work is to develop methods that can work on ALSM data alone. A robust approach for separating tree and building points in ALSM data is presented. The method is based on supervised learning of the classes (tree vs. building) in a high dimensional feature space that yields good class separability. Features used for classification are based on the generation of local mappings, from three-dimensional space to two-dimensional space, known as "spin images" for each ALSM point to be classified. The method discriminates ALSM returns in compact spaces and even where the classes are very close together or overlapping spatially. A modified algorithm of the Hough Transform is used to orient the spin images, and the spin image parameters are specified such that the mutual information between the spin image pixel values and class labels is maximized. This new approach to ALSM classification allows us to fully exploit the 3D point information in the ALSM data while still achieving good class separability, which has been a difficult trade-off in the past. 
Supported by the spin image analysis for obtaining an initial classification, an automatic approach for delineating accurate building footprints is presented. It has long been recognized that laser pulses striking building edges can produce very different first- and last-return elevations. However, in older-generation ALSM systems (<50 kHz pulse rates) such points were too few and far between to delineate building footprints precisely. Furthermore, without robust separation of nearby trees and vegetation from the buildings, simply extracting ALSM shots where the first-return elevation was much higher than the last-return elevation was not a reliable means of identifying building footprints. With the advent of ALSM systems with pulse rates in excess of 100 kHz, and by using spin-image-based segmentation, it is now possible to extract building edges from the point cloud. A refined classification that incorporates this "on-edge" information is developed for obtaining quadrangular footprints. The footprint fitting process involves line generalization, least-squares-based clustering, and dominant-point finding for segmenting individual building edges. In addition, an algorithm for fitting complex footprints using the segmented edges and the data inside footprints is proposed.
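The spin-image mapping itself is compact. A sketch of the point-to-(alpha, beta) projection that underlies the feature space (binning the coordinates into a 2D histogram then yields the spin image; bin sizes and support limits, which the text tunes via mutual information, are omitted here):

```python
import math

def spin_coordinates(p, n, x):
    """Map a 3D point x to 2D spin-image coordinates relative to an
    oriented basis point p with unit normal n:
      beta  = signed distance along the normal,
      alpha = radial distance from the normal axis."""
    d = [x[i] - p[i] for i in range(3)]
    beta = sum(n[i] * d[i] for i in range(3))
    alpha = math.sqrt(max(sum(di * di for di in d) - beta * beta, 0.0))
    return alpha, beta
```

Because (alpha, beta) is invariant to rotation about the normal, nearby ALSM returns accumulate into a signature that a supervised classifier can separate into tree and building classes.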
McMahon, Christopher J; Toomey, Joshua P; Kane, Deb M
2017-01-01
We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space external cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity, including systematically varying complexity in some regions. In this work we have introduced a new procedure for semi-automatically interrogating experimental laser output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Procaccia algorithm. The new CD procedure is called the 'minimum gradient detection algorithm'. A value of minimum gradient is returned for all time series in a data set; in some cases this can be identified as a CD, with uncertainty. Applying the new procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise.
These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. More can be known of the CD for a system when it is interrogated in a mapping context, than from calculations using isolated time series. This has been shown for three laser systems and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy; and conventional physical measurements.
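The correlation-sum core of the Grassberger-Procaccia estimator can be sketched briefly; the minimum-gradient layer described above would sit on top of this, scanning the local slope of log C(r) for a plateau. Embedding parameters and radii below are illustrative choices, not values from the study:

```python
import numpy as np

def correlation_sum(series, dim, tau, r):
    """Grassberger-Procaccia correlation sum C(r) for a scalar series
    delay-embedded with lag `tau` in `dim` dimensions."""
    n = len(series) - (dim - 1) * tau
    emb = np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])
    dists = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(n, k=1)          # distinct pairs only
    return (dists[iu] < r).mean()

def correlation_dimension(series, dim=2, tau=1, radii=(0.05, 0.1, 0.2)):
    """Estimate CD as the slope of log C(r) vs log r. A real analysis
    must first verify a genuine scaling region, which is what the
    minimum-gradient test automates."""
    x = np.log(radii)
    y = np.log([correlation_sum(series, dim, tau, r) for r in radii])
    return np.polyfit(x, y, 1)[0]
```

For data uniformly filling a one-dimensional set, the estimated slope is close to 1, which is the sanity check usually run before applying the estimator to laser time series.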
Spatial Observation and Models for Crop Water Use in Australia (Invited)
NASA Astrophysics Data System (ADS)
Hafeez, M. M.; Chemin, Y.; Rabbani, U.
2009-12-01
Recent drought in Australia and concerns about climate change have highlighted the need to manage agricultural water resources more sustainably, especially in the Murray-Darling Basin, which accounts for more than 70% of water used for crop production. For the Australian continent, approximately 90% of the precipitation that falls on the land is returned to the atmosphere through actual evapotranspiration (ET). However, despite its national significance, ET is almost impossible to measure directly at a meaningful scale in space and time through traditional point-based methods. Since the late 1990s, optical-thermal remote sensing satellite data have been used extensively for mapping actual ET from farm to catchment scales in Australia, and numerous ET algorithms have been developed to make use of data acquired by optical-thermal sensors mounted on airborne and satellite platforms. This article concentrates on the Murrumbidgee catchment, where ground truth data have been collected on a fortnightly basis since 2007 using two Eddy Covariance Systems (ECS) and two Large Aperture Scintillometers (LAS), set up to absorb variability in the landscape while measuring ET-related fluxes. The ground-truth measurements include leaf area index (LAI) from a LICOR 2000, soil heat fluxes from Hukseflux sensors, and crop reflectance from a CROPSCAN radiometer and a thermal radiometer. A UAV equipped with a multispectral scanner and a thermal imager was used to obtain very high spatial resolution NDVI and surface temperature maps over the selected farms. This large array of instruments has been used to collect specific measurements within the various micro-ecosystems of the study area. The article starts with an overview of common satellite remote sensing ET estimation algorithms: SEBAL, METRIC, Simplified Surface Energy Balance, Two-Source Energy Balance and SEBS.
They are used in Australia at both regional and catchment scales for mapping actual ET using imagery from the Landsat 5 TM/7 ETM+ and Terra MODIS sensors. ET results derived from the various remote sensing algorithms match well against the ECS and LAS data sets. These observation systems are used to collect specific ground truth data for developing new empirical and semi-empirical relationships toward a Spatial Algorithm for Mapping ET (SAM-ET) dedicated to Australian agro-ecosystems. Such estimates can underpin crop water use, crop water productivity, food security, carbon sequestration and environmental flow requirements to enhance the sustainability of agricultural systems. The next frontier is to integrate these data and models into a decision support system that helps irrigation managers coordinate water supply and demand and match crop water requirements in near real time.
Reliability of unstable periodic orbit based control strategies in biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Nagender; Singh, Harinder P.; Hasse, Maria
2015-04-15
Presence of recurrent and statistically significant unstable periodic orbits (UPOs) in time series obtained from biological systems is now routinely used as evidence for low dimensional chaos. Extracting accurate dynamical information from the detected UPO trajectories is vital for successful control strategies that either aim to stabilize the system near the fixed point or steer the system away from the periodic orbits. A hybrid UPO detection method from return maps that combines topological recurrence criterion, matrix fit algorithm, and stringent criterion for fixed point location gives accurate and statistically significant UPOs even in the presence of significant noise. Geometry of the return map, frequency of UPOs visiting the same trajectory, length of the data set, strength of the noise, and degree of nonstationarity affect the efficacy of the proposed method. Results suggest that establishing determinism from unambiguous UPO detection is often possible in short data sets with significant noise, but derived dynamical properties are rarely accurate and adequate for controlling the dynamics around these UPOs. A repeat chaos control experiment on epileptic hippocampal slices through more stringent control strategy and adaptive UPO tracking is reinterpreted in this context through simulation of similar control experiments on an analogous but stochastic computer model of epileptic brain slices. Reproduction of equivalent results suggests that far more stringent criteria are needed for linking apparent success of control in such experiments with possible determinism in the underlying dynamics.
Randomized Hough transform filter for echo extraction in DLR
NASA Astrophysics Data System (ADS)
Liu, Tong; Chen, Hao; Shen, Ming; Gao, Pengqi; Zhao, You
2016-11-01
The signal-to-noise ratio (SNR) of debris laser ranging (DLR) data is extremely low, and the valid returns in the DLR range residuals are distributed on a curve over a long observation time. It is therefore hard to extract the signals from noise in the Observed-minus-Calculated (O-C) residuals. In order to extract the valid returns autonomously, we propose a new algorithm based on the randomized Hough transform (RHT). We first pre-process the data with a histogram method to find the zonal area that contains all the possible signals, removing a large amount of noise. The data are then processed with the RHT algorithm to find the curve on which the signal points are distributed. A new parameter update strategy is introduced in the RHT to obtain the best parameters, and we analyze the values of the parameters in the algorithm. We test the algorithm on 10 Hz repetition rate DLR data from Yunnan Observatory and 100 Hz repetition rate DLR data from the Graz SLR station. For 10 Hz DLR data with relatively large and similar range gates, we can process the data in real time and extract all the signals autonomously with few false readings. For 100 Hz DLR data with longer observation times, we autonomously post-process DLR data with 0.9%, 2.7%, 8% and 33% return rates with high reliability; the extracted points contain almost all signals and a low percentage of noise. Additional noise was added to the 10 Hz DLR data to obtain lower return rates, and the valid returns could still be well extracted at 0.18% and 0.1% return rates.
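The RHT voting scheme can be sketched with a quadratic residual model; the actual curve model, histogram pre-processing, and parameter-update strategy of the paper are not reproduced here, and the tolerance and quantization steps below are illustrative assumptions:

```python
import random

def rht_extract(points, n_iter=2000, tol=0.5, quant=(0.01, 0.01, 0.1), seed=1):
    """Randomized Hough transform sketch: model the O-C residual trend
    as y = a*x^2 + b*x + c, sample point triples, solve exactly for
    (a, b, c), and vote in a hash of quantized coefficient cells. The
    cell with the most votes defines the signal curve; points within
    `tol` of it are returned as valid echoes."""
    rng = random.Random(seed)
    acc = {}
    for _ in range(n_iter):
        (x1, y1), (x2, y2), (x3, y3) = rng.sample(points, 3)
        den = (x1 - x2) * (x1 - x3) * (x2 - x3)
        if den == 0:
            continue  # degenerate triple (repeated abscissa)
        a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / den
        b = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1) + x1 * x1 * (y2 - y3)) / den
        c = y1 - a * x1 * x1 - b * x1
        key = (round(a / quant[0]), round(b / quant[1]), round(c / quant[2]))
        acc[key] = acc.get(key, 0) + 1
    ka, kb, kc = max(acc, key=acc.get)
    a, b, c = ka * quant[0], kb * quant[1], kc * quant[2]
    return [(x, y) for x, y in points if abs(a * x * x + b * x + c - y) < tol]
```

Because only triples are sampled instead of voting every point into a full parameter grid, the accumulator stays sparse, which is what makes RHT practical at low SNR.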
On the structures and mapping of auroral electrostatic potentials
NASA Technical Reports Server (NTRS)
Chiu, Y. T.; Newman, A. L.; Cornwall, J. M.
1981-01-01
The mapping of magnetospheric and ionospheric electric fields in a kinetic model of magnetospheric-ionospheric electrodynamic coupling proposed for the aurora is examined. One feature is the generalization of the kinetic current-potential relationship to the return current region (identified as a region where the parallel potential drop from magnetosphere to ionosphere is positive); such a return current always exists unless the ionosphere is electrically charged to grossly unphysical values. A coherent phenomenological picture of both the low energy return current and the high energy precipitation of an inverted-V is given. The mapping between magnetospheric and ionospheric electric fields is phrased in terms of a Green's function which acts as a filter, emphasizing magnetospheric latitudinal spatial scales of order (when mapped to the ionosphere) 50 to 150 km. This same length, when multiplied by electric fields just above the ionosphere, sets the scale for potential drops between the ionosphere and the equatorial magnetosphere.
Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham
2015-01-01
Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing larger mapping data sets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high quality, ultra-dense consensus maps with many thousands of markers per genome. The algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
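A bare (1+1) evolution-strategy skeleton for the ordering problem, using segment-reversal mutations on a pairwise recombination-distance matrix; the published algorithm's global criterion, jackknife verification, and "potentially good orders" seeding are omitted, so this is only the hill-climbing core:

```python
import random

def es_order(dist, n_iter=5000, seed=7):
    """(1+1) evolution-strategy sketch for marker ordering: mutate the
    current order by reversing a random segment and keep the child if
    the summed adjacent distance does not increase."""
    rng = random.Random(seed)
    n = len(dist)
    order = list(range(n))

    def cost(o):
        return sum(dist[o[i]][o[i + 1]] for i in range(n - 1))

    best = cost(order)
    for _ in range(n_iter):
        i, j = sorted(rng.sample(range(n), 2))
        child = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt reversal
        c = cost(child)
        if c <= best:          # accept equal cost to drift across plateaus
            order, best = child, c
    return order, best
```

On markers whose true positions lie on a line, the minimum summed adjacent distance is attained by the position-sorted order, which gives a quick correctness check.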
ICESat-2 bathymetry: an empirical feasibility assessment using MABEL
NASA Astrophysics Data System (ADS)
Forfinski, Nick; Parrish, Christopher
2016-10-01
The feasibility of deriving bathymetry from data acquired with ATLAS, the photon-counting lidar on NASA's upcoming ICESat-2 satellite, is assessed empirically by examining data from NASA's airborne ICESat-2 simulator, MABEL. The primary objectives of ICESat-2 will be to measure ice-sheet elevations, sea-ice thickness, and global biomass. However, the 6-beam, green-wavelength photon-counting lidar, combined with the 91-day repeat period and near-polar orbit, may provide unique opportunities to measure coastal bathymetry in remote, poorly mapped areas of the globe. The study focuses on high-probability bottom returns in Keweenaw Bay, Lake Superior, acquired during the "Transit to KPMD" MABEL mission in August 2012 at an altitude of 20,000 m AGL. Water-surface and bottom returns were identified and manually classified using MABEL Viewer, an in-house prototype data-explorer web application. Water-surface returns were observed in 12 green channels, and bottom returns were observed in 10 channels. Comparing each channel's mean water-surface elevation to a regional NOAA Nowcast water-level estimate revealed channel-specific elevation biases that were corrected for in our bathymetry estimation procedure. Additionally, a first-order refraction correction was applied to each bottom return. Agreement between the refraction-corrected depth profile and NOAA data acquired two years earlier by Fugro LADS with the LADS Mk II airborne system indicates that MABEL reliably detected bathymetry in depths up to 8 m, with an RMS difference of 0.7 m. In addition to feeding coastal bathymetry models, MABEL (and potentially ICESat-2 ATLAS) has the potential to seed algorithms for bathymetry retrieval from passive, multispectral satellite imagery by providing reference depths.
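The first-order refraction correction mentioned above is simple to sketch: a depth inferred with the in-air speed of light overestimates the true depth by roughly the refractive index of water. The nadir-only geometry and the value n ≈ 1.34 for green light are assumptions here; the study's exact constant and off-nadir handling are not reproduced:

```python
N_WATER = 1.34  # assumed refractive index of water at green (532 nm) wavelengths

def refraction_corrected_depth(apparent_depth_m, n=N_WATER):
    """First-order (nadir) refraction correction for lidar bathymetry.

    A time-of-flight depth computed with the in-air speed of light
    overestimates the true depth by the factor n, because light
    travels more slowly in water.
    """
    return apparent_depth_m / n

# an apparent depth of ~10.7 m corresponds to roughly 8 m of water
true_depth = refraction_corrected_depth(10.72)
```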
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called the target silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean map visual theory is used to combine a pair of SAR and IR images into a target-enhanced map, and basic belief assignment is then used to transform this map into a belief map. The detection results of the sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map at the decision level to exclude false alarms. The proposed algorithm is evaluated on a synthetic SAR and IR database generated by the SE-WORKBENCH simulator and compared with conventional algorithms; the proposed fusion scheme achieves a higher detection rate and a lower false-alarm rate.
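The basic-belief-assignment step can be illustrated with Dempster's rule of combination on a two-hypothesis frame {target, clutter}. The mass values below are invented for illustration; the paper's actual fusion-mass construction from the target-enhanced map may differ:

```python
def dempster_combine(m1, m2):
    """Combine two basic belief assignments over the same frame.

    Masses are keyed by frozenset focal elements; mass assigned to
    conflicting (empty-intersection) pairs is renormalised away,
    which is Dempster's rule of combination.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

T, C = frozenset({"target"}), frozenset({"clutter"})
sar = {T: 0.6, C: 0.1, T | C: 0.3}   # hypothetical SAR detector belief
ir  = {T: 0.7, C: 0.2, T | C: 0.1}   # hypothetical IR detector belief
fused = dempster_combine(sar, ir)
```

When both sensors lean toward "target", the fused belief in the target hypothesis exceeds either sensor's alone, which is the behaviour the decision-level fusion exploits.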
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
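A minimal sketch of the simulated-annealing loop at the heart of an approach like CIMO, applied to a toy stand-in for the intensity-map mismatch (a small integer weight vector). The cooling schedule, neighbourhood move, and cost function are illustrative assumptions, not CIMO's or Pinnacle's actual parameters:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated-annealing loop: uphill moves are accepted
    with probability exp(-delta/T), letting the search escape local
    minima before the temperature T cools toward greedy descent."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# toy mismatch: squared difference from a hypothetical target map
target = [3, 1, 4, 1, 5]
def cost(w):
    return sum((a - b) ** 2 for a, b in zip(w, target))

def neighbour(w, rng):
    y = list(w)
    i = rng.randrange(len(y))
    y[i] = max(0, y[i] + rng.choice((-1, 1)))  # weights stay non-negative
    return y

weights, mismatch = simulated_annealing(cost, neighbour, [0, 0, 0, 0, 0])
```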
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
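The anytime idea can be sketched with a toy interruptible bisection: the longer the scheduler lets it run, the better its answer, which is exactly the answer-quality/time trade-off described above. The scheduler here is a bare step-budget loop, far simpler than the expectation-driven allocation the paper describes:

```python
def anytime_sqrt(x, lo=0.0, hi=None):
    """An 'anytime' bisection for sqrt(x): it yields its current best
    answer after every step, so a scheduler can stop it at any time
    and still get a usable (if coarse) result."""
    hi = max(1.0, x) if hi is None else hi
    while True:
        mid = (lo + hi) / 2.0
        yield mid
        if mid * mid < x:
            lo = mid
        else:
            hi = mid

def run_for(anytime_iter, steps):
    """Crude deliberation scheduler: grant a fixed step budget and
    take the last (best) answer produced within it."""
    answer = None
    for _, answer in zip(range(steps), anytime_iter):
        pass
    return answer

coarse = run_for(anytime_sqrt(2.0), 5)    # little time: rough answer
fine = run_for(anytime_sqrt(2.0), 50)     # more time: better answer
```

The monotone improvement with budget is what lets a deliberation scheduler trade computation time for answer quality across several such algorithms.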
Foliage discrimination using a rotating ladar
NASA Technical Reports Server (NTRS)
Castano, A.; Matthies, L.
2003-01-01
We present a real-time algorithm that detects foliage using range measurements from a rotating laser. Objects not classified as foliage are conservatively labeled as non-drivable obstacles. In contrast to related work that uses range statistics to classify objects, we exploit the expected localities and continuities of an obstacle in both space and time. Also, instead of attempting to find a single accurate discriminating factor for every ladar return, we hypothesize the class of a few returns and then spread the confidence (and classification) to other returns using the locality constraints. The Urbie robot is presently using this algorithm to discriminate drivable grass from obstacles during outdoor autonomous navigation tasks.
Essential Autonomous Science Inference on Rovers (EASIR)
NASA Technical Reports Server (NTRS)
Roush, Ted L.; Shipman, Mark; Morris, Robert; Gazis, Paul; Pedersen, Liam
2003-01-01
Existing constraints on the time, computational, and communication resources associated with Mars rover missions suggest that on-board science evaluation of sensor data can help decrease human-directed operational planning, optimize returned science data volumes, and recognize unique or novel data, all of which increase the scientific return from a mission. Many different levels of science autonomy exist, and each impacts the data collected and returned by, and the activities of, rovers. Several computational algorithms designed to recognize objects of interest to geologists and biologists are discussed. The algorithms implement various functions that produce scientific opinions, and several scenarios illustrate how these opinions can be used.
Managing Returns in a Catalog Distribution Center
ERIC Educational Resources Information Center
Gates, Joyce; Stuart, Julie Ann; Bonawi-tan, Winston; Loehr, Sarah
2004-01-01
A research team at Purdue University in the United States developed an algorithm that considers several different factors, in addition to cost, to help catalog distribution centers process their returns more efficiently. A case study to teach students important concepts involved in developing a solution to the returns disposition problem…
Parallel algorithms for mapping pipelined and parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
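For concreteness, here is the contiguous-chain mapping problem the paper studies, solved with the straightforward O(nm²) dynamic program; the paper's contribution is precisely the faster serial and parallel algorithms, which this sketch does not implement:

```python
def min_bottleneck_mapping(loads, n_procs):
    """Contiguously map a chain of module loads onto processors so the
    maximum per-processor load (the bottleneck) is minimised.

    dp[k][j] is the best bottleneck assigning the first j modules to
    k processors; each processor receives a contiguous, non-empty run.
    """
    m = len(loads)
    prefix = [0]
    for w in loads:
        prefix.append(prefix[-1] + w)
    seg = lambda i, j: prefix[j] - prefix[i]   # total load of modules i..j-1
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n_procs + 1)]
    dp[0][0] = 0.0
    for k in range(1, n_procs + 1):
        for j in range(1, m + 1):
            for i in range(j):
                dp[k][j] = min(dp[k][j], max(dp[k - 1][i], seg(i, j)))
    return dp[n_procs][m]

# a chain of 6 module weights split over 3 processors
bottleneck = min_bottleneck_mapping([2, 3, 4, 1, 6, 2], 3)
```

The best split here is [2][3, 4, 1][6, 2] (or [2, 3][4, 1][6, 2]), giving a bottleneck load of 8.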
A VaR Algorithm for Warrants Portfolio
NASA Astrophysics Data System (ADS)
Dai, Jun; Ni, Liyun; Wang, Xiangrong; Chen, Weizhong
Based on the Gamma-Vega-Cornish-Fisher methodology, this paper proposes an algorithm that calculates VaR by adjusting the quantile at a given confidence level using the four moments (mean, variance, skewness and kurtosis) of the warrants portfolio return, and by estimating the portfolio variance with the EWMA methodology. The proposed algorithm also accounts for the attenuation of the effect of historical returns on the portfolio return of future days. An empirical study shows that, compared with the Gamma-Cornish-Fisher method and the standard normal method, the VaR calculated by Gamma-Vega-Cornish-Fisher improves the effectiveness of portfolio risk forecasting by virtue of considering both the Gamma risk and the Vega risk of the warrants. The results are tested for significance using the two-tailed test developed by Kupiec; the calculated VaRs of the warrants portfolio all pass at the 5% significance level.
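A compact sketch of the two ingredients named above: an EWMA variance estimate and a Cornish-Fisher adjustment of the normal quantile using skewness and excess kurtosis. The decay factor 0.94 and the 5% quantile are conventional RiskMetrics-style choices, not values taken from the paper, and the portfolio moments (including the Gamma/Vega contributions) are assumed to be supplied:

```python
import math

def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style EWMA variance: older squared returns decay
    by lam each day, capturing the attenuation of history's effect."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return var

def cornish_fisher_var(mean, var, skew, exkurt, z=-1.6449):
    """VaR from a Cornish-Fisher adjustment of the normal quantile z
    (z = -1.6449 is the 5% lower tail) using the higher moments."""
    zcf = (z
           + (z ** 2 - 1.0) * skew / 6.0
           + (z ** 3 - 3.0 * z) * exkurt / 24.0
           - (2.0 * z ** 3 - 5.0 * z) * skew ** 2 / 36.0)
    return -(mean + zcf * math.sqrt(var))

# for a symmetric, normal-like portfolio the adjustment vanishes
var95 = cornish_fisher_var(mean=0.0, var=0.0004, skew=0.0, exkurt=0.0)
```

Negative skewness fattens the loss tail, so the same variance then yields a larger VaR than the plain normal quantile.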
The Sentinel-3 Surface Topography Mission (S-3 STM): Level 2 SAR Ocean Retracker
NASA Astrophysics Data System (ADS)
Dinardo, S.; Lucas, B.; Benveniste, J.
2015-12-01
The SRAL radar altimeter, on board the ESA Sentinel-3 (S-3) mission, can operate either in the pulse-limited mode (also known as LRM) or in the novel Synthetic Aperture Radar (SAR) mode. Thanks to the initial results from SAR altimetry obtained by exploiting CryoSat-2 data, the scientific community's interest in this new technology has significantly increased, and the definition of accurate processing methodologies (along with validation strategies) has consequently become critically important. In this paper, we present the algorithm proposed to retrieve the standard ocean geophysical parameters (ocean topography, wave height and sigma nought) from S-3 STM SAR return waveforms, and the validation results achieved so far using CryoSat-2 data as well as simulated data. The inversion method (retracking) used to extract the geophysical information from the return waveform is a curve best-fitting scheme based on the bounded Levenberg-Marquardt least-squares estimation method (LEVMAR-LSE). The S-3 STM SAR ocean retracking algorithm adopts, as its return-waveform model, the "SAMOSA" model [Ray et al., 2014], named after the R&D project SAMOSA (led by Satoc and funded by ESA) in which it was initially developed. The SAMOSA model is a physically based model that offers a complete description of a SAR altimeter return waveform from the ocean surface, expressed either as maps of reflected power in delay-Doppler space (also known as stacks) or as multilooked echoes. SAMOSA is able to account for an elliptical antenna pattern, mispointing errors in roll and yaw, the surface scattering pattern, non-linear ocean wave statistics and spherical Earth surface effects. In spite of its comprehensive character, the SAMOSA model comes with a compact analytical formulation expressed in terms of modified Bessel functions.
The specifications of the retracking algorithm have been gathered in a technical document (DPM) and delivered as the baseline for industrial implementation. For operational needs, fine tuning of the fitting library parameters and the use of a look-up table for the Bessel function computation accelerated the CPU execution time more than 100-fold, making execution on par with real time. In the course of the ESA-funded project CryoSat+ for Oceans (CP4O), new technical evolutions of the algorithm have been proposed (such as the use of a PTR-width look-up table and the application of stack masking). One of the main outcomes of the CP4O project was that, with these latest evolutions, the SAMOSA SAR retracking gave results equivalent to the CNES CPP retracking prototype, which was built with a totally different approach; this reinforces the validation results. Work is currently underway to align the industrial implementation with these new evolutions. Further, in order to test the algorithm with a dataset as realistic as possible, a set of simulated test data (generated by the S-3 STM end-to-end simulator) has been created by CLS, following the specifications described in a test-data-set requirements document drafted by ESA. In this work, we show the baseline algorithm details, the evolutions and their impact, and the results obtained by processing the CryoSat-2 data and the simulated test data set.
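The retracking step is, at its core, an iterative least-squares fit of a waveform model. The sketch below fits a much simpler error-function leading edge (not the SAMOSA model, whose Bessel-function form is beyond a short example) with a minimal fixed-damping Levenberg-Marquardt/Gauss-Newton loop; model, parameters and data are all invented for illustration:

```python
import math

def waveform(t, amp, t0, rise=5.0):
    """Toy leading-edge model: an error-function step, loosely in the
    spirit of physically based ocean retrackers (NOT the SAMOSA model)."""
    return 0.5 * amp * (1.0 + math.erf((t - t0) / rise))

def retrack(ts, ys, amp=90.0, t0=28.0, iters=100, lam=1e-3):
    """Minimal Levenberg-Marquardt fit of (amp, t0): damped normal
    equations (J^T J + lam I) d = J^T r with a numerical Jacobian."""
    for _ in range(iters):
        r = [y - waveform(t, amp, t0) for t, y in zip(ts, ys)]
        h = 1e-6
        ja = [(waveform(t, amp + h, t0) - waveform(t, amp, t0)) / h for t in ts]
        jb = [(waveform(t, amp, t0 + h) - waveform(t, amp, t0)) / h for t in ts]
        a11 = sum(x * x for x in ja) + lam
        a22 = sum(x * x for x in jb) + lam
        a12 = sum(x * y for x, y in zip(ja, jb))
        b1 = sum(x * y for x, y in zip(ja, r))
        b2 = sum(x * y for x, y in zip(jb, r))
        det = a11 * a22 - a12 * a12
        amp += (a22 * b1 - a12 * b2) / det
        t0 += (a11 * b2 - a12 * b1) / det
    return amp, t0

ts = [float(t) for t in range(60)]
truth = [waveform(t, amp=100.0, t0=30.0) for t in ts]   # synthetic "echo"
amp_fit, t0_fit = retrack(ts, truth)
```

An operational retracker would additionally bound the parameters, adapt the damping term, and (as described above) use look-up tables to keep execution on par with real time.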
Independent EEG Sources Are Dipolar
Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott
2012-01-01
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
Improved Microseismicity Detection During Newberry EGS Stimulations
Templeton, Dennise
2013-10-01
Effective enhanced geothermal systems (EGS) require optimal fracture networks for efficient heat transfer between hot rock and fluid. Microseismic mapping is a key tool used to infer the subsurface fracture geometry. Traditional earthquake detection and location techniques are often employed to identify microearthquakes in geothermal regions. However, most commonly used algorithms may miss events if the seismic signal of an earthquake is small relative to the background noise level or if a microearthquake occurs within the coda of a larger event. Consequently, we have developed a set of algorithms that provide improved microearthquake detection. Our objective is to investigate the microseismicity at the DOE Newberry EGS site to better image the active regions of the underground fracture network during and immediately after the EGS stimulation. Detection of more microearthquakes during EGS stimulations will allow for better seismic delineation of the active regions of the underground fracture system. This improved knowledge of the reservoir network will improve our understanding of subsurface conditions, and allow improvement of the stimulation strategy that will optimize heat extraction and maximize economic return.
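For context, a classic short-term-average/long-term-average (STA/LTA) trigger, the sort of traditional detector whose limitations motivate this work, can be sketched as follows; the window lengths and synthetic trace are illustrative assumptions, not the study's data:

```python
def sta_lta(signal, nsta, nlta):
    """Classic STA/LTA trigger: ratio of a short-term amplitude
    average to a long-term one. Events buried in noise or in the coda
    of a larger event may never push this ratio over threshold, which
    is the failure mode discussed above."""
    ratios = []
    for i in range(nlta, len(signal)):
        sta = sum(abs(x) for x in signal[i - nsta:i]) / nsta
        lta = sum(abs(x) for x in signal[i - nlta:i]) / nlta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# quiet background with one burst: the ratio spikes at the event
sig = [0.1] * 200 + [1.0] * 20 + [0.1] * 80
ratios = sta_lta(sig, nsta=10, nlta=100)
peak = max(ratios)
```

A detection threshold of, say, 3 fires cleanly on this isolated burst; the improved algorithms aim at the events this ratio misses.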
A real time QRS detection using delay-coordinate mapping for the microcontroller implementation.
Lee, Jeong-Whan; Kim, Kyeong-Seop; Lee, Bongsoo; Lee, Byungchae; Lee, Myoung-Ho
2002-01-01
In this article, we propose a new algorithm for real-time detection of QRS complexes in ECG signals that uses the characteristics of phase portraits reconstructed by delay-coordinate mapping, utilizing lag rotundity. In reconstructing the phase portrait, the mapping parameters (time delay and mapping dimension) play important roles in shaping the portraits drawn in the new dimensional space. Experimentally, the optimal mapping time delay for detection of QRS complexes turned out to be 20 ms. To explore the meaning of this time delay and the proper mapping dimension, we applied fill-factor, mutual-information, and autocorrelation-function algorithms that are generally used to analyze the chaotic characteristics of sampled signals. These results showed that the performance of our proposed algorithm relies mainly on geometrical properties of the reconstructed phase portrait, such as its area. As a practical application, we used our algorithm in the design of a small cardiac event recorder intended to record patients' ECG and R-R intervals for 1 h, in order to investigate the HRV characteristics of patients with vasovagal syncope symptoms. For evaluation, we implemented our algorithm in C and applied it to the MIT/BIH arrhythmia database of 48 subjects. Our proposed algorithm achieved a 99.58% detection rate for QRS complexes.
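The delay-coordinate mapping itself is easy to sketch: pair each sample with a delayed copy of itself and measure a geometric property (here, the shoelace area) of the resulting portrait. The 500 Hz rate and sine test signal are invented; a quarter-period delay plays the role of the 20 ms delay found optimal above:

```python
import math

def delay_embed(signal, delay):
    """Two-dimensional delay-coordinate map: pair each sample with the
    sample `delay` steps later."""
    return [(signal[i], signal[i + delay]) for i in range(len(signal) - delay)]

def polygon_area(points):
    """Shoelace area of the traced phase portrait. A wide QRS loop
    blows this area up relative to a near-collinear baseline, which is
    the geometrical property the detector relies on."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# a sine embedded at a quarter-period delay traces the unit circle
fs, f = 500, 5                               # assumed 500 Hz sampling, 5 Hz tone
sig = [math.sin(2 * math.pi * f * i / fs) for i in range(fs)]
portrait = delay_embed(sig, fs // (4 * f))   # delay = T/4 = 25 samples
```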
NASA Astrophysics Data System (ADS)
Azahari Razak, Khamarrul; Straatsma, Menno; van Westen, Cees; Malet, Jean-Philippe; de Jong, Steven M.
2010-05-01
Airborne Laser Scanning (ALS) is the state-of-the-art technology for topographic mapping over a wide variety of spatial and temporal scales. It is also a promising technique for the identification and mapping of landslides in a forested mountainous landscape, since it can pass through gaps in the forest foliage and record the terrain height under vegetation cover. To date, most images derived from satellite imagery, aerial photographs or synthetic aperture radar are not appropriate for visual interpretation of landslide features covered by dense vegetation. Careful landslide mapping is nevertheless a necessity for understanding landslide processes, and it is essential for landslide hazard and risk assessment. This research demonstrates the capabilities of high-resolution ALS data to recognize and identify different types of landslides in mixed forest in Barcelonnette, France, and in tropical rainforest in the Cameron Highlands, Malaysia. ALS measurements over the 100-year-old forest in the Bois Noir catchment were carried out in 2007 and 2009, both with Riegl laser scanners. First and last pulses with a density of one point per square meter were derived from the 2007 dataset, whereas multiple-return pulses (up to five returns), with 60 points per square meter over forested terrain, were derived from the July 2009 dataset. This catchment is highly affected by shallow landslides, which mostly occur beneath dense vegetation; it is located in the dry intra-Alpine zone and is representative of the climate of the southern French Alps. In the Cameron Highlands, first and last pulse data were captured in 2004 with an Optech laser scanner under a Malaysian national pilot study, covering an area of up to 300 square kilometres at a comparatively low point density.
With precipitation intensities of up to 3000 mm per year over rugged topography, and elevations up to 2800 m a.s.l., mapping landslides under tropical rainforest, which is highly vegetated and rapidly re-vegetates, remains a challenge. With advances in point-cloud processing algorithms, high-resolution Digital Terrain Models (DTMs) are becoming a very valuable data source for the production of landslide-related maps. In this study, two filtering algorithms, based on least-squares interpolation and progressive TIN densification, are used to extract the bare-earth surface. Quantitative and qualitative assessment carried out under ISPRS Working Group III/3 showed that these algorithms perform well in terms of discontinuity preservation, vegetation on slopes, and high-outlier influence in the point clouds. Hence, they are capable of extracting ground points under difficult scenarios, especially in rugged forested terrain. The optimal terrain information is exploited from the ALS point clouds, particularly to preserve important landslide characteristics and to filter out unnecessary features. Morphological characteristics and geometric signatures of landslides are taken into consideration in the derivation of a high-quality digital terrain model. Furthermore, the ALS-derived DTMs are investigated at different spatial scales for suitable representation of hillslope morphology. Finally, appropriate 2D and 3D visualization methods are presented to help image interpreters detect landslides and classify them according to type, movement mechanism and activity status in forested mountainous terrain.
Locally Contractive Dynamics in Generalized Integrate-and-Fire Neurons*
Jimenez, Nicolas D.; Mihalas, Stefan; Brown, Richard; Niebur, Ernst; Rubin, Jonathan
2013-01-01
Integrate-and-fire models of biological neurons combine differential equations with discrete spike events. In the simplest case, the reset of the neuronal voltage to its resting value is the only spike event. The response of such a model to constant input injection is limited to tonic spiking. We here study a generalized model in which two simple spike-induced currents are added. We show that this neuron exhibits not only tonic spiking at various frequencies but also the commonly observed neuronal bursting. Using analytical and numerical approaches, we show that this model can be reduced to a one-dimensional map of the adaptation variable and that this map is locally contractive over a broad set of parameter values. We derive a sufficient analytical condition on the parameters for the map to be globally contractive, in which case all orbits tend to a tonic spiking state determined by the fixed point of the return map. We then show that bursting is caused by a discontinuity in the return map, in which case the map is piecewise contractive. We perform a detailed analysis of a class of piecewise contractive maps that we call bursting maps and show that they robustly generate stable bursting behavior. To the best of our knowledge, this work is the first to point out the intimate connection between bursting dynamics and piecewise contractive maps. Finally, we discuss bifurcations in this return map, which cause transitions between spiking patterns. PMID:24489486
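A piecewise-linear caricature of such a return map makes the mechanism concrete: both branches are contractive (slope 1/2), but the discontinuity at the threshold forces the orbit to alternate across it, settling into a stable cycle rather than a fixed point. The parameter values below are invented for illustration, not taken from the neuron model:

```python
def return_map(w, a=0.5, theta=1.0, b=1.0, c=0.2):
    """Piecewise-linear caricature of the adaptation return map:
    each branch is contractive (|a| < 1) with a jump at w = theta.
    With these values each branch's fixed point lies on the OTHER
    side of theta, so no branch can capture the orbit."""
    return a * w + (b if w < theta else c)

def orbit(w0, steps=200):
    ws = [w0]
    for _ in range(steps):
        ws.append(return_map(ws[-1]))
    return ws

# the orbit hops back and forth across theta: a stable 2-cycle,
# the map-level analogue of repetitive bursting
trajectory = orbit(0.0)
```

Solving the two branch equations simultaneously gives the cycle values w ≈ 1.467 and w ≈ 0.933, which the contraction makes globally attracting for these parameters.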
An evolving effective stress approach to anisotropic distortional hardening
Lester, B. T.; Scherzinger, W. M.
2018-03-11
A new yield surface with an evolving effective stress definition is proposed for consistently and efficiently describing anisotropic distortional hardening. Specifically, a new internal state variable is introduced to capture the thermodynamic evolution between different effective stress definitions. The corresponding yield surface and evolution equations of the internal variables are derived from thermodynamic considerations enabling satisfaction of the second law. A closest point projection return mapping algorithm for the proposed model is formulated and implemented for use in finite element analyses. Finally, select constitutive and larger scale boundary value problems are solved to explore the capabilities of the model and examine the impact of distortional hardening on constitutive and structural responses. Importantly, these simulations demonstrate the tractability of the proposed formulation in investigating large-scale problems of interest.
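The return-mapping idea, in its simplest radial-return form for an isotropic J2 (von Mises) surface, can be sketched as follows; the paper's closest-point projection for a distortional surface is considerably more involved, and the moduli and yield stress here are invented for illustration:

```python
import numpy as np

def radial_return(stress_trial, sigma_y, H, mu):
    """Radial-return update for J2 plasticity with linear isotropic
    hardening: if the trial stress violates the yield condition,
    project it back onto the surface along the deviatoric normal."""
    s = stress_trial - np.trace(stress_trial) / 3.0 * np.eye(3)  # deviator
    s_norm = np.linalg.norm(s)
    f_trial = s_norm - np.sqrt(2.0 / 3.0) * sigma_y
    if f_trial <= 0.0:
        return stress_trial, 0.0                    # elastic step
    dgamma = f_trial / (2.0 * mu + 2.0 * H / 3.0)   # plastic multiplier
    n = s / s_norm                                  # return direction
    return stress_trial - 2.0 * mu * dgamma * n, dgamma

# uniaxial trial stress well outside a sigma_y = 100 yield surface
trial = np.diag([300.0, 0.0, 0.0])
sigma, dgamma = radial_return(trial, sigma_y=100.0, H=0.0, mu=80_000.0)
```

With H = 0 the corrected deviator norm lands exactly on sqrt(2/3)·sigma_y, and the hydrostatic part of the stress is untouched, both hallmarks of the radial return.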
Self-Directed Cooperative Planetary Rovers
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo; Morris, Robert (Technical Monitor)
2003-01-01
The project is concerned with the development of decision-theoretic techniques to optimize the scientific return of planetary rovers. Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We have developed a comprehensive solution to this problem that involves high-level tools to describe a mission; a compiler that maps a mission description and additional probabilistic models of the components of the rover into a Markov decision problem; and algorithms for solving the rover control problem that are sensitive to the limited computational resources and high-level of uncertainty in this domain.
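The compiler's target, a Markov decision problem, can be solved with textbook value iteration; the two-state rover MDP below (drive with a slip probability vs. idle) is an invented miniature, not the project's actual model:

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a small MDP. P[s][a] is a list of
    (probability, next_state) pairs; R[s][a] is the immediate reward."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = []
        for s in range(n):
            V_new.append(max(
                R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                for a in range(len(P[s]))))
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# state 0 = "driving", state 1 = "at the science target"
P = [[[(0.8, 1), (0.2, 0)],    # action 0: drive (20% chance of slipping)
      [(1.0, 0)]],             # action 1: idle
     [[(1.0, 1)]]]             # at target: stay and collect science
R = [[0.0, 0.0], [1.0]]
V = value_iteration(P, R)
```

Driving dominates idling here (V[0] ≈ 8.78 vs. the 7.9 idling would yield), which is the kind of resource-aware trade-off the rover controller must make.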
An image-space parallel convolution filtering algorithm based on shadow map
NASA Astrophysics Data System (ADS)
Li, Hua; Yang, Huamin; Zhao, Jianping
2017-07-01
Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method of generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then encoded as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm is applied to smooth out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous work.
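The box-filter step can be sketched in scalar form (the paper's version is a GPU-parallel convolution; this serial stand-in only shows the effect): averaging a binary light-visibility map turns a hard shadow edge into a graded penumbra:

```python
def box_filter(vis, radius):
    """Box filter over a binary light-visibility map: averaging hard
    0/1 shadow tests yields fractional penumbra values. Border pixels
    are normalised by the number of in-bounds neighbours."""
    h, w = len(vis), len(vis[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += vis[yy][xx]
                        cnt += 1
            out[y][x] = acc / cnt
    return out

# hard shadow edge: left half lit (1), right half occluded (0)
vis = [[1, 1, 0, 0] for _ in range(4)]
soft = box_filter(vis, 1)
```

After filtering, the row values grade from fully lit through 2/3 and 1/3 toward shadow instead of jumping from 1 to 0, which is the soft-shadow falloff.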
Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.
Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu
2018-08-01
To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because of the low signal-to-noise ratio of the serial images. In this study, we proposed to exploit the neighboring pixels as regularization terms and to adaptively determine the regularization parameters according to inter-pixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal-means-filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal-means-filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal-means-filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
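The underlying pixel-wise model is a monoexponential decay, S(TE) = S0·exp(−R2*·TE). A plain log-linear least-squares fit, the unregularized baseline that NLS and PCANR improve upon (echo times and R2* value invented), looks like this:

```python
import math

def fit_r2star(te_ms, signal):
    """Pixel-wise monoexponential fit S(TE) = S0 * exp(-R2* * TE) via
    linear least squares on log(S). No neighbourhood regularisation:
    this is the noise-sensitive baseline discussed above."""
    n = len(te_ms)
    xs, ys = te_ms, [math.log(s) for s in signal]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    r2star = -slope                          # in 1/ms
    s0 = math.exp(mean_y - slope * mean_x)
    return r2star, s0

# noiseless decay with R2* = 0.05 /ms (i.e. 50 /s) and S0 = 1000
te = [1.0, 2.0, 4.0, 8.0, 12.0]
truth = [1000.0 * math.exp(-0.05 * t) for t in te]
r2star, s0 = fit_r2star(te, truth)
```

On noisy, high-R2* pixels the late echoes drop toward the noise floor and this fit degrades, which is precisely where PCANR's adaptive neighbourhood terms help.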
Recursive approach to the moment-based phase unwrapping method.
Langley, Jason A; Brice, Robert G; Zhao, Qun
2010-06-01
The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.
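The building block of the proposed method is a least-squares Legendre-series approximation, which NumPy provides directly; a 1-D sketch on an invented smooth "phase" profile (the paper forms products of two such 1-D series for a 2-D map):

```python
import numpy as np
from numpy.polynomial import legendre

# A smooth 1-D profile on [-1, 1] standing in for one row of a phase map
x = np.linspace(-1.0, 1.0, 101)
phase = 0.5 * x**3 - 1.2 * x + 0.3

# Least-squares Legendre-series fit and its reconstruction
coeffs = legendre.legfit(x, phase, deg=5)
recon = legendre.legval(x, coeffs)
print(np.max(np.abs(recon - phase)) < 1e-8)
```

Unlike the Gegenbauer weight function, the Legendre basis introduces no singular weight at the domain edges, which is the point of the substitution.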
Burke, S L; Dorward, P K; Korner, P I
1986-09-01
In both anaesthetized and conscious rabbits, perivascular balloon inflations slowly raised or lowered mean arterial pressure (M.A.P.), at 1-2 mmHg/s, from resting to various plateau pressures. Deflations then returned the M.A.P. to resting. 'Steady-state' curves relating M.A.P. to unitary aortic baroreceptor firing, integrated aortic nerve activity and heart rate were derived during the primary and return pressure changes, and they formed typical hysteresis loops. In single units, return M.A.P.-frequency curves were shifted in the same direction as the primary pressure changes by an average 0.37 mmHg per mmHg change in M.A.P. Shifts were linearly related to the changes in M.A.P. between resting and plateau levels for all pressure rises and for falls of less than 30 mmHg. They were established within 30 s and were quantitatively similar to the rapid resetting of baroreceptor function curves found 15 min-2 h after a change in resting M.A.P. (Dorward, Andresen, Burke, Oliver & Korner, 1982). Unit threshold pressures were shifted within 20 s to the same extent as the over-all curve shift to which they contributed. In the whole aortic nerve, return M.A.P.-integrated activity curves were shifted to the same degree as unit function curves in both anaesthetized and conscious rabbits. Simultaneous shifts of return reflex M.A.P.-heart rate curves were also seen in conscious rabbits within 30 s. During M.A.P. falls, receptor and reflex hysteresis was similar, but during M.A.P. rises, reflex shifts were double baroreceptor shifts, suggesting the involvement of other pressure-sensitive receptors. We conclude that hysteresis shifts in baroreceptor function curves, which follow the reversal of slow ramp changes in blood pressure, are a form of rapid resetting. They are accompanied by rapid resetting of reflex heart rate responses.
We regard this as an important mechanism in blood pressure control which produces relatively high-gain reflex responses, during slow directional pressure changes, over a wider range of absolute pressure levels than would otherwise be possible.
Greenberg, D; Istrail, S
1994-09-01
The Human Genome Project requires better software for the creation of physical maps of chromosomes. Current mapping techniques involve breaking large segments of DNA into smaller, more manageable pieces, gathering information on all the small pieces, and then constructing a map of the original large piece from the information about the small pieces. Unfortunately, in the process of breaking up the DNA some information is lost and noise of various types is introduced; in particular, the order of the pieces is not preserved. Thus, the map maker must solve a combinatorial problem in order to reconstruct the map. Good software is indispensable for quick, accurate reconstruction. The reconstruction is complicated by various experimental errors. A major source of difficulty--which seems to be inherent to the recombination technology--is the presence of chimeric DNA clones. It is fairly common for two disjoint DNA pieces to form a chimera, i.e., a fusion of two pieces which appears as a single piece. Attempts to order chimeras will fail unless they are algorithmically divided into their constituent pieces. Despite consensus within the genomic mapping community on the critical importance of correcting chimerism, algorithms for solving the chimeric clone problem have received only passing attention in the literature. Based on a model proposed by Lander (1992a, b), this paper presents the first algorithms for analyzing chimerism. We construct physical maps in the presence of chimerism by creating optimization functions whose minima correlate with map quality. Although these optimization problems are invariably NP-complete, our algorithms are guaranteed to produce solutions which are close to the optimum. The practical import of using these algorithms depends on the strength of the correlation of the function to the map quality as well as on the accuracy of the approximations.
We employ two fundamentally different optimization functions as a means of avoiding biases likely to decorrelate the solutions from the desired map. Experiments on simulated data show that both our algorithm which minimizes the number of chimeric fragments in a solution and our algorithm which minimizes the maximum number of fragments per clone in a solution do, in fact, correlate with high-quality solutions. Furthermore, tests on simulated data using parameters set to mimic real experiments show that the algorithms have the potential to find high-quality solutions with real data. We plan to test our software against real data from the Whitehead Institute and from the Los Alamos Genomic Research Center in the near future.
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that the HGSAA outperforms a genetic algorithm (GA) in computing time, solution quality, and computing stability. The proposed model is very useful for helping managers make the right decisions in an e-supply-chain environment.
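As a sketch of the simulated-annealing half of such a hybrid, the Metropolis acceptance rule applied to a toy assignment cost (this is not the paper's HGSAA; the cost function, cooling schedule, and seed are invented for illustration):

```python
import math
import random

# Toy cost: squared distance of an integer assignment from a known optimum
def cost(x):
    return sum((xi - 3) ** 2 for xi in x)

random.seed(42)
x = [0, 0, 0]          # initial assignment
t = 10.0               # initial temperature
while t > 1e-3:
    i = random.randrange(len(x))
    cand = list(x)
    cand[i] += random.choice((-1, 1))   # local perturbation
    delta = cost(cand) - cost(x)
    # Metropolis criterion: always accept improvements, sometimes accept
    # worse moves with probability exp(-delta / t) to escape local optima
    if delta <= 0 or random.random() < math.exp(-delta / t):
        x = cand
    t *= 0.99          # geometric cooling
print(cost(x))
```

In the HGSAA this acceptance step would be embedded inside a genetic search over the joint location-inventory-routing decision variables.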
How well can we test probabilistic seismic hazard maps?
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Stein, Seth; Camelbeeck, Thierry; Vleminckx, Bart
2017-04-01
Recent large earthquakes that gave rise to shaking much stronger than shown in probabilistic seismic hazard (PSH) maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSH model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating shaking histories for an area with assumed uniform distribution of earthquakes, Gutenberg-Richter magnitude-frequency relation, Poisson temporal occurrence model, and ground-motion prediction equation (GMPE). We compare the maximum simulated shaking at many sites over time with that predicted by a hazard map generated for the same set of parameters. The Poisson model predicts that the fraction of sites at which shaking will exceed that of the hazard map is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. Exceedance is typically associated with infrequent large earthquakes, as observed in real cases. The ensemble of simulated earthquake histories yields distributions of fractional exceedance with mean equal to the predicted value. Hence, the PSH algorithm appears to be internally consistent and can be regarded as verified for this set of simulations. However, simulated fractional exceedances show a large scatter about the mean value that decreases with increasing t/T, increasing observation time and increasing Gutenberg-Richter a-value (combining intrinsic activity rate and surface area), but is independent of GMPE uncertainty. 
This scatter is due to the variability of earthquake recurrence, and so decreases as the largest earthquakes occur in more simulations. Our results are important for evaluating the performance of a hazard map based on misfits in fractional exceedance, and for assessing whether such a misfit arises by chance or reflects a bias in the map. More specifically, for a broad range of Gutenberg-Richter a-values we determined theoretical confidence intervals on the allowed misfits in fractional exceedance, and hence on the percentage of hazard-map bias that can be detected by comparison with observed shaking histories. Given that in the real world we have only one shaking history for an area, these results indicate that even if a hazard map does not fit the observations, it is very difficult to assess its veracity, especially for low-to-moderate-seismicity regions. Because our model is a simplified version of reality, any additional uncertainty or complexity will tend to widen these confidence intervals.
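The Poisson relation quoted in this abstract, p = 1 - exp(-t/T), can be illustrated numerically; the observation window, return period, site count, and seed below are arbitrary examples:

```python
import math
import random

# Expected fraction of sites where maximum shaking exceeds the map value
# after observing for time t, for a map with return period T (Poisson model)
def expected_exceedance_fraction(t, T):
    return 1.0 - math.exp(-t / T)

# e.g. 50 years of observation against a 475-year return-period map
p = expected_exceedance_fraction(50.0, 475.0)

# Monte Carlo check: at each site, exceedances arrive as a Poisson process
# with rate 1/T, so a site exceeds the map iff its first event falls within t
random.seed(1)
n_sites = 200_000
hits = sum(1 for _ in range(n_sites)
           if random.expovariate(1.0 / 475.0) < 50.0)
print(round(p, 4), round(hits / n_sites, 4))
```

The scatter of the simulated fraction about p for a finite number of sites mirrors the fractional-exceedance variability the abstract describes.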
Quasi-conformal mapping with genetic algorithms applied to coordinate transformations
NASA Astrophysics Data System (ADS)
González-Matesanz, F. J.; Malpica, J. A.
2006-11-01
In this paper, piecewise conformal mapping for the transformation of geodetic coordinates is studied. An algorithm is presented that improves on a previous algorithm published by Lippus [2004a. On some properties of piecewise conformal mappings. Eesti NSV Teaduste Akadeemia Toimetised. Füüsika-Matemaatika 53, 92-98; 2004b. Transformation of coordinates using piecewise conformal mapping. Journal of Geodesy 78 (1-2), 40]; the improvement comes from using a genetic algorithm to partition the complex plane into convex polygons, whereas the original did so manually. As a case study, the method is applied to the transformation between the Spanish datum ED50 and ETRS89, and both its advantages and disadvantages are discussed.
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi
2011-08-01
The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip (NoC). To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, we first present a heuristic mapping algorithm that produces a set of different mappings in a reasonable time. This algorithm allows designers to identify the most promising solutions in a large design space: mappings with low communication cost that in some cases reach the optimum. Another evaluated parameter, the vulnerability index, is then considered as a criterion for estimating the fault-tolerance property of all produced mappings. Finally, to yield a mapping that trades off these two parameters, a linear function is defined and introduced. More flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
Asymmetric neighborhood functions accelerate ordering process of self-organizing maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ota, Kaiichiro; Aoki, Takaaki; Kurata, Koji
2011-02-15
A self-organizing map (SOM) algorithm can generate a topographic map from a high-dimensional stimulus space to a low-dimensional array of units. Because a topographic map preserves neighborhood relationships between the stimuli, the SOM can be applied to certain types of information processing such as data visualization. During the learning process, however, topological defects frequently emerge in the map. The presence of defects tends to drastically slow down the formation of a globally ordered topographic map. To remove such topological defects, it has been reported that an asymmetric neighborhood function is effective, but only in the simple case of mapping one-dimensional stimuli to a chain of units. In this paper, we demonstrate that even when high-dimensional stimuli are used, the asymmetric neighborhood function is effective for both artificial and real-world data. Our results suggest that applying the asymmetric neighborhood function to the SOM algorithm improves the reliability of the algorithm and enables it to process complicated, high-dimensional data.
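For orientation, a minimal 1-D SOM of the kind the abstract discusses; for brevity this sketch uses the standard symmetric Gaussian neighborhood (the paper's contribution is to skew this function asymmetrically about the winner), and all sizes, rates, and schedules are invented:

```python
import numpy as np

# Minimal 1-D SOM: maps scalar stimuli onto a chain of units
rng = np.random.default_rng(0)
n_units = 10
w = rng.random(n_units)            # one weight per unit in the chain
positions = np.arange(n_units)

for step in range(2000):
    x = rng.random()               # stimulus drawn uniformly from [0, 1]
    winner = np.argmin(np.abs(w - x))
    sigma = 3.0 * np.exp(-step / 700.0) + 0.5   # shrinking neighborhood
    # symmetric Gaussian neighborhood; the asymmetric variant would skew
    # this profile toward one side of the winner
    h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2))
    w += 0.1 * h * (x - w)         # move winner and neighbors toward x

print(w.round(2))
```

Topological defects correspond to kinks in the learned weight sequence; the asymmetric neighborhood accelerates their removal.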
Andrew T. Hudak; Nicholas L. Crookston; Jeffrey S. Evans; Michael K. Falkowski; Alistair M. S. Smith; Paul E. Gessler; Penelope Morgan
2006-01-01
We compared the utility of discrete-return light detection and ranging (lidar) data and multispectral satellite imagery, and their integration, for modeling and mapping basal area and tree density across two diverse coniferous forest landscapes in north-central Idaho. We applied multiple linear regression models subset from a suite of 26 predictor variables derived...
Myer, Gregory D; Paterno, Mark V; Ford, Kevin R; Quatman, Carmen E; Hewett, Timothy E
2006-06-01
Rehabilitation following anterior cruciate ligament (ACL) reconstruction has undergone a relatively rapid and global evolution over the past 25 years. However, there is an absence of standardized, objective criteria to accurately assess an athlete's ability to progress through the end stages of rehabilitation and safe return to sport. Return-to-sport rehabilitation, progressed by quantitatively measured functional goals, may improve the athlete's integration back into sport participation. The purpose of the following clinical commentary is to introduce an example of a criteria-driven algorithm for progression through return-to-sport rehabilitation following ACL reconstruction. Our criteria-based protocol incorporates a dynamic assessment of baseline limb strength, patient-reported outcomes, functional knee stability, bilateral limb symmetry with functional tasks, postural control, power, endurance, agility, and technique with sport-specific tasks. Although this algorithm has limitations, it serves as a foundation to expand future evidence-based evaluation and to foster critical investigation into the development of objective measures to accurately determine readiness to safely return to sport following injury.
Text image authenticating algorithm based on MD5-hash function and Henon map
NASA Astrophysics Data System (ADS)
Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue
2017-07-01
To meet the evidentiary requirements of text images, this paper proposes a fragile watermarking algorithm based on a hash function and the Henon map. The algorithm divides a text image into blocks, identifies the flippable and non-flippable pixels of each block according to PSD, generates a watermark from the non-flippable pixels with MD5, encrypts the watermark with the Henon map, and selects the embedding blocks. Simulation results show that the algorithm has good tamper-localization ability and can be used to authenticate the authenticity and integrity of text images.
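The two primitives the abstract combines can be sketched in a few lines: an MD5 digest of block pixel data and a keystream derived from the Henon map. The classic parameters a = 1.4, b = 0.3, the initial state, and the quantization rule are assumptions for illustration, not the paper's keyed values:

```python
import hashlib

# 16-byte MD5 watermark of a block's (non-flippable) pixel data
def md5_watermark(block_bytes):
    return hashlib.md5(block_bytes).digest()

# Byte keystream from the Henon map x' = 1 - a*x^2 + y, y' = b*x
def henon_keystream(n, x=0.1, y=0.3, a=1.4, b=0.3):
    out = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out.append(int(abs(x) * 1e6) % 256)   # quantize orbit to bytes
    return bytes(out)

wm = md5_watermark(b"\x00\x01\x01\x00")
ks = henon_keystream(len(wm))
encrypted = bytes(p ^ k for p, k in zip(wm, ks))  # XOR-encrypt watermark
decrypted = bytes(c ^ k for c, k in zip(encrypted, ks))
print(decrypted == wm)
```

Any flip of a watermarked pixel changes the recomputed MD5 digest, which is what localizes tampering to the affected block.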
Flattening maps for the visualization of multibranched vessels.
Zhu, Lei; Haker, Steven; Tannenbaum, Allen
2005-02-01
In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided.
Autonomous localisation of rovers for future planetary exploration
NASA Astrophysics Data System (ADS)
Bajpai, Abhinav
Future Mars exploration missions will have increasingly ambitious goals compared to current rover and lander missions. There will be a need for extremely long-distance traverses over shorter periods of time. This will allow more varied and complex scientific tasks to be performed and increase the overall value of the missions. The missions may also include a sample return component, where samples collected on the surface will be stored in a cache to be returned to Earth for further study. In order to make these missions feasible, future rover platforms will require increased levels of autonomy, allowing them to operate without heavy reliance on a terrestrial ground station. Being able to autonomously localise the rover is an important element in increasing the rover's capability to independently explore. This thesis develops a Planetary Monocular Simultaneous Localisation And Mapping (PM-SLAM) system aimed specifically at a planetary exploration context. The system uses a novel modular feature detection and tracking algorithm called hybrid saliency in order to achieve robust tracking while maintaining low computational complexity in the SLAM filter. The hybrid-saliency technique uses a combination of cognitively inspired saliency features and point-based feature descriptors as input to the SLAM filter. The system was tested on simulated datasets generated using the Planetary, Asteroid and Natural scene Generation Utility (PANGU) as well as two real-world datasets which closely approximated images from a planetary environment. The system was shown to provide a higher-accuracy localisation estimate than a state-of-the-art VO system tested on the same dataset. In order to localise the rover absolutely, further techniques are investigated which attempt to determine the rover's position in orbital maps.
Orbiter Mask Matching uses point-based features detected by the rover to associate descriptors with large features extracted from orbital imagery and stored in the rover's memory prior to mission launch. A proof of concept is evaluated using a PANGU-simulated boulder field.
Classification of fMRI resting-state maps using machine learning techniques: A comparative study
NASA Astrophysics Data System (ADS)
Gallos, Ioannis; Siettos, Constantinos
2017-11-01
We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial Independent Component Analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support vector machines (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
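The pipeline shape (reduce, embed, classify) can be sketched with plain NumPy on synthetic stand-in data; the two-Gaussian "connectivity features", the PCA-via-SVD reduction, and the leave-one-out 1-NN classifier are illustrative substitutes for the paper's ICA features, manifold embeddings, and SVM/k-NN comparison:

```python
import numpy as np

# Stand-in data, not fMRI: two Gaussian classes in 50 dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 50)),
               rng.normal(1.0, 1.0, (40, 50))])
y = np.array([0] * 40 + [1] * 40)

# PCA to 3 components via SVD of the centered data
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# Leave-one-out 1-NN classification in the embedded space
correct = 0
for i in range(len(Z)):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf                      # exclude the held-out sample
    correct += int(y[np.argmin(d)] == y[i])
print(correct / len(Z))
```

Swapping the SVD step for a nonlinear embedding (ISOMAP, Diffusion Maps) while keeping the classifier fixed is exactly the comparison the abstract describes.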
Doble, Brett; Lorgelly, Paula
2016-04-01
To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease-severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour-type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms, estimated via response mapping and ordinary least-squares regression using dummy variables, performed well on a number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across the ten tumour-type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms provide more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
Fatyga, Mirek; Dogan, Nesrin; Weiss, Elizabeth; Sleeman, William C; Zhang, Baoshe; Lehman, William J; Williamson, Jeffrey F; Wijesooriya, Krishni; Christensen, Gary E
2015-01-01
Commonly used methods of assessing the accuracy of deformable image registration (DIR) rely on image segmentation or landmark selection. These methods are very labor intensive and thus limited to a relatively small number of image pairs. Direct voxel-by-voxel comparison can be automated to examine fluctuations in DIR quality on a long series of image pairs. A voxel-by-voxel comparison of three DIR algorithms applied to lung patients is presented. Registrations are compared by comparing volume histograms formed both with individual DIR maps and with a voxel-by-voxel subtraction of the two maps. When two DIR maps agree, one concludes that both maps are interchangeable in treatment planning applications, though one cannot conclude that either one agrees with the ground truth. If two DIR maps significantly disagree, one concludes that at least one of the maps deviates from the ground truth. We use the method to compare three DIR algorithms applied to peak inhale-peak exhale registrations of 4DFBCT data obtained from 13 patients. All three algorithms appear to be nearly equivalent when compared using DICE similarity coefficients. A comparison based on Jacobian volume histograms shows that all three algorithms measure changes in total volume of the lungs with reasonable accuracy, but show large differences in the variance of the Jacobian distribution on contoured structures. Analysis of voxel-by-voxel subtraction of DIR maps shows differences between algorithms that exceed a centimeter for some registrations. Deformation maps produced by DIR algorithms must be treated as mathematical approximations of physical tissue deformation that are not self-consistent and may thus be useful only in applications for which they have been specifically validated. The three algorithms tested in this work perform fairly robustly for the task of contour propagation, but produce potentially unreliable results for the task of DVH accumulation or measurement of local volume change.
Performance of DIR algorithms varies significantly from one image pair to the next, hence validation efforts that are exhaustive but performed on a small number of image pairs may not reflect the performance of the same algorithm in practical clinical situations. Such efforts should be supplemented by validation based on a longer series of images of clinical quality.
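The Jacobian volume histograms used above are built from voxel-wise Jacobian determinants of the deformation map. A sketch on a synthetic deformation (a uniform 10% stretch along one axis, so det J should be 1.1 everywhere; grid size and field are invented):

```python
import numpy as np

# Synthetic 3-D deformation map on an 8x8x8 voxel grid
shape = (8, 8, 8)
z, y, x = np.meshgrid(*[np.arange(s, dtype=float) for s in shape],
                      indexing="ij")
map_z, map_y, map_x = z, y, 1.1 * x   # identity plus 10% stretch in x

# Voxel-wise 3x3 Jacobian: rows are map components, columns are axes
J = np.empty(shape + (3, 3))
for r, comp in enumerate((map_z, map_y, map_x)):
    # np.gradient returns derivatives along axes 0, 1, 2 in that order
    for c, g in enumerate(np.gradient(comp)):
        J[..., r, c] = g
detJ = np.linalg.det(J)
print(float(detJ.mean()))
```

Histogramming detJ over a contoured structure gives the Jacobian volume histogram; det J > 1 indicates local expansion and det J < 1 local contraction.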
NASA Astrophysics Data System (ADS)
Jia, Duo; Wang, Cangjiao; Lei, Shaogang
2018-01-01
Mapping vegetation dynamic types in mining areas is significant for revealing the mechanisms of environmental damage and for guiding ecological construction. Dynamic types of vegetation can be identified by applying interannual normalized difference vegetation index (NDVI) time series. However, phase differences and time shifts in interannual time series decrease mapping accuracy in mining regions. To overcome these problems and to increase the accuracy of mapping vegetation dynamics, an interannual Landsat time series for optimum vegetation growing status was constructed first by using the enhanced spatial and temporal adaptive reflectance fusion model algorithm. We then proposed a Markov-random-field-optimized semisupervised Gaussian dynamic time warping kernel-based fuzzy c-means (FCM) clustering algorithm for interannual NDVI time series to map dynamic vegetation types in mining regions. The proposed algorithm has been tested in the Shengli mining region and the Shendong mining region, which are typical representatives of China's open-pit and underground mining regions, respectively. Experiments show that the proposed algorithm can solve the problems of phase differences and time shifts to achieve better performance when mapping vegetation dynamic types. The overall accuracies for the Shengli and Shendong mining regions were 93.32% and 89.60%, respectively, improvements of 7.32% and 25.84% over the original semisupervised FCM algorithm.
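The dynamic time warping (DTW) distance underlying the paper's kernel is what makes the clustering tolerant of time shifts: a shifted copy of a series has DTW distance zero even when its Euclidean distance is large. A minimal DTW sketch (textbook recurrence, not the paper's Gaussian-kernel variant):

```python
import numpy as np

# Classic DTW distance between two 1-D series via dynamic programming
def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy aligns perfectly under DTW
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))
```

Plugging such a distance into a Gaussian kernel, exp(-DTW(a, b)²/2σ²), yields a similarity suitable for kernel FCM clustering of NDVI trajectories.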
A novel image encryption algorithm based on chaos maps with Markov properties
NASA Astrophysics Data System (ADS)
Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang
2015-02-01
To construct a high-complexity, secure, and low-cost image encryption algorithm, a class of chaotic maps with Markov properties was studied and such an algorithm was proposed. This class of chaos has higher complexity than the logistic and tent maps while retaining uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space; it performs better than the original lattice. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a keystream generator. A true random number is used to disturb the key, dynamically changing the permutation matrix and the keystream. Experiments show that the keystream passes the NIST SP 800-22 test. The encryption algorithm resists chosen-plaintext (CPA), chosen-ciphertext (CCA), and differential attacks. The algorithm is sensitive to the initial key and changes the distribution of the pixel values of the image; correlation between adjacent pixels is also eliminated. Compared with an algorithm based on the logistic map, it has higher complexity and better uniformity, closer to a true random number, and it is also efficient to implement, which demonstrates its practical value.
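The baseline the paper compares against, a logistic-map keystream XOR cipher, is easy to sketch; the initial value, control parameter, quantization, and toy image below are invented, and the paper's Markov-property chaos would replace the generator:

```python
import numpy as np

# Byte keystream from the logistic map x' = r*x*(1-x)
def logistic_keystream(n, x0=0.37, r=3.99):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256   # quantize orbit to bytes
    return out

# Toy 16x16 8-bit "image"
rng = np.random.default_rng(3)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)

ks = logistic_keystream(img.size).reshape(img.shape)
cipher = img ^ ks              # XOR-encrypt pixel values
plain = cipher ^ ks            # XOR with the same keystream decrypts
print(np.array_equal(plain, img))
```

A full scheme would add the permutation (pixel-shuffling) stage and the random key disturbance described in the abstract; the XOR diffusion shown here is only the substitution half.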
A Double Perturbation Method for Reducing Dynamical Degradation of the Digital Baker Map
NASA Astrophysics Data System (ADS)
Liu, Lingfeng; Lin, Jun; Miao, Suoxia; Liu, Bocheng
2017-06-01
The digital Baker map is widely used in different kinds of cryptosystems, especially for image encryption. However, any chaotic map realized on a finite-precision device (e.g. a computer) will suffer from dynamical degradation, which refers to short cycle lengths, low complexity and strong correlations. In this paper, a novel double perturbation method is proposed for reducing the dynamical degradation of the digital Baker map. Both state variables and system parameters are perturbed by the digital logistic map. Numerical experiments show that the perturbed Baker map can achieve good statistical and cryptographic properties. Furthermore, a new image encryption algorithm is provided as a simple application. With a rather simple algorithm, the encrypted image can achieve high security, competitive with recently proposed image encryption algorithms.
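A sketch of the idea on the unit square: iterate the Baker map and perturb the state with an auxiliary logistic orbit so finite-precision trajectories do not collapse onto short cycles. The perturbation size, logistic parameter, and initial values are invented, and only the state perturbation is shown (the paper also perturbs the system parameters):

```python
# Logistic map used as the perturbation source
def logistic(x, r=3.9999):
    return r * x * (1.0 - x)

def perturbed_baker(x, y, p, eps=1e-6):
    # classic Baker map: stretch x by 2, halve y, cut and stack
    if x < 0.5:
        x, y = 2.0 * x, y / 2.0
    else:
        x, y = 2.0 * x - 1.0, y / 2.0 + 0.5
    # small state perturbation driven by the auxiliary logistic orbit
    p = logistic(p)
    x = (x + eps * p) % 1.0
    return x, y, p

x, y, p = 0.123456, 0.654321, 0.3
for _ in range(1000):
    x, y, p = perturbed_baker(x, y, p)
print(0.0 <= x < 1.0 and 0.0 <= y < 1.0)
```

Without the perturbation, the doubling in x discards one bit of state per step on fixed-precision hardware, which is what drives the short cycles the paper addresses.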
Growing a hypercubical output space in a self-organizing feature map.
Bauer, H U; Villmann, T
1997-01-01
Neural maps project data from an input space onto a neuron position in an (often lower dimensional) output space grid in a neighborhood-preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by adapting the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real-world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
NASA Astrophysics Data System (ADS)
Strippoli, L. S.; Gonzalez-Arjona, D. G.
2018-04-01
GMV has worked extensively on activities aimed at developing, validating, and verifying, up to TRL-6, advanced GNC and IP algorithms for Mars Sample Return rendezvous, under different ESA contracts on the development of advanced algorithms for the VBN sensor.
Processing and evaluation of riverine waveforms acquired by an experimental bathymetric LiDAR
NASA Astrophysics Data System (ADS)
Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.
2010-12-01
Accurate mapping of fluvial environments with airborne bathymetric LiDAR is challenged not only by environmental characteristics but also the development and application of software routines to post-process the recorded laser waveforms. During a bathymetric LiDAR survey, the transmission of the green-wavelength laser pulses through the water column is influenced by a number of factors including turbidity, the presence of organic material, and the reflectivity of the streambed. For backscattered laser pulses returned from the river bottom and digitized by the LiDAR detector, post-processing software is needed to interpret and identify distinct inflections in the reflected waveform. Relevant features of this energy signal include the air-water interface, volume reflection from the water column itself, and, ideally, a strong return from the bottom. We discuss our efforts to acquire, analyze, and interpret riverine surveys using the USGS Experimental Advanced Airborne Research LiDAR (EAARL) in a variety of fluvial environments. Initial processing of data collected in the Trinity River, California, using the EAARL Airborne Lidar Processing Software (ALPS) highlighted the difficulty of retrieving a distinct bottom signal in deep pools. Examination of laser waveforms from these pools indicated that weak bottom reflections were often neglected by a trailing edge algorithm used by ALPS to process shallow riverine waveforms. For the Trinity waveforms, this algorithm had a tendency to identify earlier inflections as the bottom, resulting in a shallow bias. Similarly, an EAARL survey along the upper Colorado River, Colorado, also revealed the inadequacy of the trailing edge algorithm for detecting weak bottom reflections. We developed an alternative waveform processing routine by exporting digitized laser waveforms from ALPS, computing the local extrema, and fitting Gaussian curves to the convolved backscatter. 
Our field data indicate that these techniques improved the definition of pool areas dominated by weak bottom reflections. These processing techniques are also being tested for EAARL surveys collected along the Platte and Klamath Rivers where environmental conditions have resulted in suppressed or convolved bottom reflections.
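The local-extrema and Gaussian-fitting step described above can be sketched as follows. This is a minimal illustration on a synthetic two-return waveform, not the ALPS implementation; the three-point log-parabola fit recovers a Gaussian's amplitude, center, and width exactly when the peak is Gaussian.

```python
import math

def local_maxima(w, floor=0.0):
    """Indices of strict local maxima above a noise floor."""
    return [i for i in range(1, len(w) - 1)
            if w[i] > floor and w[i - 1] < w[i] >= w[i + 1]]

def fit_gaussian_peak(w, i):
    """Fit a Gaussian (amp, center, sigma) to the three samples around
    peak index i via a log-parabola; exact when the peak is Gaussian."""
    la, lb, lc = (math.log(w[j]) for j in (i - 1, i, i + 1))
    a = 0.5 * (la - 2.0 * lb + lc)   # quadratic coefficient (negative at a peak)
    b = 0.5 * (lc - la)              # linear coefficient
    center = i - b / (2.0 * a)       # sub-sample peak location
    sigma = math.sqrt(-1.0 / (2.0 * a))
    amp = math.exp(lb - b * b / (4.0 * a))
    return amp, center, sigma

# Synthetic waveform: strong surface return plus a weak bottom return.
surface = lambda t: 40.0 * math.exp(-(t - 12.0) ** 2 / (2.0 * 1.5 ** 2))
bottom = lambda t: 4.0 * math.exp(-(t - 40.5) ** 2 / (2.0 * 2.0 ** 2))
wave = [surface(t) + bottom(t) for t in range(60)]

peaks = local_maxima(wave, floor=0.5)            # [surface, bottom]
amp, center, sigma = fit_gaussian_peak(wave, peaks[-1])
```

Picking the last significant peak as the bottom return reflects the reasoning above; a production routine would also have to handle convolved (overlapping) returns and water-column attenuation.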
EAARL coastal topography--Alligator Point, Louisiana, 2010
Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Wright, C.W.; Brock, J.C.; Nagle, D.B.; Vivekanandan, Saisudha; Fredericks, Xan; Barras, J.A.
2012-01-01
This project provides highly detailed and accurate datasets of a portion of Alligator Point, Louisiana, acquired on March 5 and 6, 2010. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the National Aeronautics and Space Administration (NASA) Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. 
Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations.
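Extraction of the first and last significant return within a waveform can be sketched as below; the noise-floor estimate from the leading samples and the 3-sigma threshold are assumptions for illustration, not the ALPS algorithm.

```python
def significant_returns(w, k=3.0):
    """Return (first, last) sample indices where the waveform rises above
    a noise threshold estimated from its leading samples (mean + k*std)."""
    noise = w[:5]                      # assume the leading samples are noise
    mean = sum(noise) / len(noise)
    std = (sum((x - mean) ** 2 for x in noise) / len(noise)) ** 0.5
    thresh = mean + k * std
    above = [i for i, x in enumerate(w) if x > thresh]
    if not above:
        return None
    return above[0], above[-1]

# Hypothetical digitized waveform: noise, a canopy/surface pulse, and a
# weaker last ("bare earth" or bottom) pulse.
wave = [1, 2, 1, 2, 1, 1, 9, 25, 9, 2, 1, 1, 1, 6, 14, 6, 1, 1]
first, last = significant_returns(wave)
```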
The Structure-Mapping Engine: Algorithm and Examples.
ERIC Educational Resources Information Center
Falkenhainer, Brian; And Others
This description of the Structure-Mapping Engine (SME), a flexible, cognitive simulation program for studying analogical processing which is based on Gentner's Structure-Mapping theory of analogy, points out that the SME provides a "tool kit" for constructing matching algorithms consistent with this theory. This report provides: (1) a…
Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E
2017-05-01
Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predicted health-utility gains on the EuroQol 5-dimension 3-level instrument (EQ-5D-3L) from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent of improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health-utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
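The raw-versus-differenced contrast can be illustrated with a toy ordinary-least-squares mapping; the patient values below are invented, chosen so that between-person differences in baseline utility inflate the cross-sectional (raw) slope relative to the within-person (differenced) one.

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Two observations (baseline, follow-up) per hypothetical patient:
# (RMQ score, EQ-5D utility). Sicker patients have lower baseline utility,
# so pooling raw scores steepens the slope beyond the within-person change.
patients = [
    [(4, 0.80), (2, 0.84)],    # within-person slope -0.02
    [(12, 0.50), (9, 0.56)],   # within-person slope -0.02
    [(20, 0.20), (16, 0.28)],  # within-person slope -0.02
]
raw_x = [r for p in patients for r, _ in p]
raw_y = [u for p in patients for _, u in p]
diff_x = [p[1][0] - p[0][0] for p in patients]
diff_y = [p[1][1] - p[0][1] for p in patients]

raw_slope = ols_slope(raw_x, raw_y)    # inflated by between-person variation
diff_slope = ols_slope(diff_x, diff_y)  # within-person change only
```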
NASA Astrophysics Data System (ADS)
Spaulding, M. L.
2015-12-01
The vision for STORMTOOLS is to provide access to a suite of coastal planning tools (numerical models, among others), available as a web service, that allows widespread accessibility and applicability at high resolution for user-selected coastal areas of interest. The first products developed under this framework were flood inundation maps, with and without sea level rise, for varying return periods for RI coastal waters. The flood mapping methodology is based on using the water level vs return period relationship at a primary NOAA water level gauging station and then spatially scaling the values, based on the predictions of high-resolution storm and wave simulations performed by the Army Corps of Engineers' North Atlantic Comprehensive Coastal Study (NACCS) for tropical and extratropical storms on an unstructured grid, to estimate inundation levels for varying return periods. The scaling for the RI application used Newport, RI water levels as the reference point. Predictions are provided for 25, 50, and 100 yr return periods (at the upper 95% confidence level), with sea level rises of 1, 2, 3, and 5 ft. Simulations have also been performed for historical hurricane events including 1938, Carol (1954), Bob (1991), and Sandy (2012) and nuisance flooding events with return periods of 1, 3, 5, and 10 yr. Access to the flooding maps is via a web-based map viewer that seamlessly covers all coastal waters of the state at one meter resolution. The GIS structure of the map viewer allows overlays of additional relevant data sets (roads and highways, wastewater treatment facilities, schools, hospitals, emergency evacuation routes, etc.) as desired by the user. The simplified flooding maps are publicly available and are now being implemented for state and community resilience planning and vulnerability assessment activities in response to climate change impacts.
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map.
Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen
2015-09-11
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map computed under the contextual cues of the apron area. First, candidate regions where aircraft may be present are detected within the apron area. Second, a directional local gradient distribution detector is used to obtain a gradient textural saliency map over the candidate regions. Finally, targets are detected by segmenting the saliency map with a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that the algorithm can detect aircraft targets quickly and accurately while decreasing the false alarm rate.
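The CFAR-style segmentation step can be sketched in one dimension; the window sizes, scale factor, and saliency values below are hypothetical, and the paper's detector operates on the full 2-D saliency map.

```python
def ca_cfar(x, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: flag cells exceeding scale * local clutter mean,
    estimated from training cells outside a guard band around each cell."""
    n = len(x)
    hits = []
    for i in range(n):
        cells = [x[j] for j in range(i - guard - train, i + guard + train + 1)
                 if 0 <= j < n and abs(j - i) > guard]
        if cells and x[i] > scale * (sum(cells) / len(cells)):
            hits.append(i)
    return hits

# Hypothetical 1-D slice through a gradient textural saliency map:
# low clutter with two bright aircraft responses.
row = [1, 2, 1, 1, 2, 1, 30, 32, 1, 2, 1, 1, 2, 1, 28, 1, 2, 1]
detections = ca_cfar(row)
```

The guard band keeps the target's own energy out of the clutter estimate, which is what makes the threshold adaptive rather than global.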
A multiscale curvature algorithm for classifying discrete return LiDAR in forested environments
Jeffrey S. Evans; Andrew T. Hudak
2007-01-01
One prerequisite to the use of light detection and ranging (LiDAR) across disciplines is differentiating ground from nonground returns. The objective was to automatically and objectively classify points within unclassified LiDAR point clouds, with few model parameters and minimal postprocessing. Presented is an automated method for classifying LiDAR returns as ground...
Automated Studies of Continuing Current in Lightning Flashes
NASA Astrophysics Data System (ADS)
Martinez-Claros, Jose
Continuing current (CC) is a continuous luminosity in the lightning channel that lasts longer than 10 ms following a lightning return stroke to ground. Lightning flashes containing CC are associated with direct damage to power lines and are thought to be responsible for causing lightning-induced forest fires. The development of an algorithm that automates continuing current detection by combining NLDN (National Lightning Detection Network) and LEFA (Langmuir Electric Field Array) datasets for CG flashes will be discussed. The algorithm was applied to thousands of cloud-to-ground (CG) flashes within 40 km of Langmuir Lab, New Mexico measured during the 2013 monsoon season. It counts the number of flashes in a single minute of data and the number of return strokes of an individual lightning flash; records the time and location of each return stroke; performs peak analysis on E-field data, and uses the slope of interstroke interval (ISI) E-field data fits to recognize whether continuing current (CC) exists within the interval. Following CC detection, duration and magnitude are measured. The longest observed CC in 5588 flashes was 631 ms. The performance of the algorithm (vs. human judgement) was checked on 100 flashes. At best, the reported algorithm is "correct" 80% of the time, where correct means that multiple stations agree with each other and with a human on both the presence and duration of CC. Of the 100 flashes that were validated against human judgement, 62% were hybrid. Automated analysis detects the first but misses the second return stroke in many cases where the second return stroke is followed by long CC. This problem is also present in human interpretation of field change records.
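The slope-based CC test described above can be sketched as follows; the threshold and field values are hypothetical, and the real algorithm works on multi-station LEFA records rather than a single clean trace.

```python
def field_slope(t, e):
    """Least-squares slope of an electric-field record over an interval."""
    n = len(t)
    mt, me = sum(t) / n, sum(e) / n
    num = sum((a - mt) * (b - me) for a, b in zip(t, e))
    den = sum((a - mt) ** 2 for a in t)
    return num / den

def has_continuing_current(t, e, slope_thresh=0.5):
    """Flag an interstroke interval as containing continuing current when
    the field ramps steadily (|slope| above a hypothetical threshold)."""
    return abs(field_slope(t, e)) > slope_thresh

# Hypothetical E-field records (arbitrary units) over a 100-ms interval:
t = [i * 10.0 for i in range(11)]   # ms
ramp = [0.9 * x for x in t]         # steady field change -> CC present
flat = [0.1, -0.1, 0.0, 0.1, 0.0, -0.1, 0.1, 0.0, -0.1, 0.1, 0.0]

cc = has_continuing_current(t, ramp)
no_cc = has_continuing_current(t, flat)
```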
NASA Astrophysics Data System (ADS)
de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.
In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to the learning of series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as objectives of the training both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for a period of eight years of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules with returns in the testing period far superior to those obtained by applying customary methodologies, and even clearly superior to Buy&Hold. This work shows that the proposed methodology is valid for different assets in a different market from that of the previous work.
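The return/risk objective can be illustrated with a stand-in crisp rule; the moving-average rule, scalarized fitness, and risk-aversion weight below are assumptions for illustration only, since the paper evolves its rules with the GAP algorithm rather than fixing them.

```python
def rule_returns(prices, window=3):
    """Daily returns of a crisp rule: hold the asset only when price is
    above its trailing moving average (a stand-in for an evolved rule)."""
    rets = []
    for i in range(window, len(prices) - 1):
        ma = sum(prices[i - window:i]) / window
        r = prices[i + 1] / prices[i] - 1.0
        rets.append(r if prices[i] > ma else 0.0)   # flat when out of market
    return rets

def fitness(rets, risk_aversion=0.5):
    """Scalarized two-objective fitness: mean return minus penalized risk."""
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / n
    return mean - risk_aversion * var ** 0.5

prices = [100, 101, 103, 102, 104, 107, 106, 109, 111, 110]
rets = rule_returns(prices)
score = fitness(rets)
```

A GA/GP search would compare candidate rules by this kind of score; treating return and risk as separate objectives, as the paper does, is the multi-objective generalization of this scalarization.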
Uncertain programming models for portfolio selection with uncertain returns
NASA Astrophysics Data System (ADS)
Zhang, Bo; Peng, Jin; Li, Shengguo
2015-10-01
In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment in which security returns cannot be well reflected by historical data, but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed to provide a general method for solving the new models in general cases. Finally, two numerical examples are provided to show the performance and applications of the models and algorithm.
Guo, Hao; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and closely related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on these problems in e-commerce logistics systems, we formulate a location-inventory-routing model with returns that have no quality defects. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a GA on computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489
Backup Attitude Control Algorithms for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
ODonnell, James R., Jr.; Andrews, Stephen F.; Ericsson-Jackson, Aprille J.; Flatley, Thomas W.; Ward, David K.; Bay, P. Michael
1999-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The MAP spacecraft will perform its mission, studying the early origins of the universe, in a Lissajous orbit around the Earth-Sun L(sub 2) Lagrange point. Due to limited mass, power, and financial resources, a traditional reliability concept involving fully redundant components was not feasible. This paper will discuss the redundancy philosophy used on MAP, describe the hardware redundancy selected (and why), and present backup modes and algorithms that were designed in lieu of additional attitude control hardware redundancy to improve the odds of mission success. Three of these modes have been implemented in the spacecraft flight software. The first onboard mode allows the MAP Kalman filter to be used with digital sun sensor (DSS) derived rates, in case of the failure of one of MAP's two two-axis inertial reference units. Similarly, the second onboard mode allows a star tracker only mode, using attitude and derived rate from one or both of MAP's star trackers for onboard attitude determination and control. The last backup mode onboard allows a sun-line angle offset to be commanded that will allow solar radiation pressure to be used for momentum management and orbit stationkeeping. In addition to the backup modes implemented on the spacecraft, two backup algorithms have been developed in the event of less likely contingencies. One of these is an algorithm for implementing an alternative scan pattern to MAP's nominal dual-spin science mode using only one or two reaction wheels and thrusters. Finally, an algorithm has been developed that uses thruster one shots while in science mode for momentum management. This algorithm has been developed in case system momentum builds up faster than anticipated, to allow adequate momentum management while minimizing interruptions to science. 
In this paper, each mode and algorithm will be discussed, and simulation results presented.
NASA Astrophysics Data System (ADS)
Gruber, Thomas; Grim, Larry; Fauth, Ryan; Tercha, Brian; Powell, Chris; Steinhardt, Kristin
2011-05-01
Large networks of disparate chemical/biological (C/B) sensors, MET sensors, and intelligence, surveillance, and reconnaissance (ISR) sensors reporting to various command/display locations can lead to conflicting threat information, questions of alarm confidence, and a confused situational awareness. Sensor netting algorithms (SNA) are being developed to resolve these conflicts and to report high confidence consensus threat map data products on a common operating picture (COP) display. A data fusion algorithm design was completed in a Phase I SBIR effort and development continues in the Phase II SBIR effort. The initial implementation and testing of the algorithm has produced some performance results. The algorithm accepts point and/or standoff sensor data, and event detection data (e.g., the location of an explosion) from various ISR sensors (e.g., acoustic, infrared cameras, etc.). These input data are preprocessed to assign estimated uncertainty to each incoming piece of data. The data are then sent to a weighted tomography process to obtain a consensus threat map, including estimated threat concentration level uncertainty. The threat map is then tested for consistency and the overall confidence for the map result is estimated. The map and confidence results are displayed on a COP. The benefits of a modular implementation of the algorithm and comparisons of fused / un-fused data results will be presented. The metrics for judging the sensor-netting algorithm performance are warning time, threat map accuracy (as compared to ground truth), false alarm rate, and false alarm rate v. reported threat confidence level.
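The uncertainty-weighted consensus idea can be sketched for a single map cell with inverse-variance weighting; this is an illustrative stand-in, not the weighted-tomography algorithm itself, and the readings are hypothetical.

```python
def fuse(readings):
    """Inverse-variance weighted consensus of (value, sigma) sensor readings
    for one grid cell of a threat map; returns (estimate, fused sigma)."""
    weights = [1.0 / (s * s) for _, s in readings]
    total = sum(weights)
    est = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return est, (1.0 / total) ** 0.5

# Hypothetical concentration readings (value, 1-sigma uncertainty) from
# three disparate sensors viewing the same cell:
readings = [(10.0, 1.0), (12.0, 2.0), (9.0, 3.0)]
est, sigma = fuse(readings)
```

Note that the fused sigma is smaller than that of the best single sensor, which is the sense in which fusion raises alarm confidence.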
Wen, Qing; Kim, Chang-Sik; Hamilton, Peter W; Zhang, Shu-Dong
2016-05-11
Gene expression connectivity mapping has gained much popularity recently, with a number of successful applications in biomedical research testifying to its utility and promise. Previous methodological research in connectivity mapping has mainly focused on two of the key components in the framework, namely, the reference gene expression profiles and the connectivity mapping algorithms. The other key component in this framework, the query gene signature, has been left to users to construct without much consensus on how this should be done, although it is the issue most relevant to end users. As a key input to the connectivity mapping process, the gene signature is crucially important in returning biologically meaningful and relevant results. This paper intends to formulate a standardized procedure for constructing high quality gene signatures from a user's perspective. We describe a two-stage process for making quality gene signatures using gene expression data as initial inputs. First, a differential gene expression analysis comparing two distinct biological states is performed; only the genes that have passed stringent statistical criteria are considered in the second stage of the process, which involves ranking genes based on statistical as well as biological significance. We introduce a "gene signature progression" method as a standard procedure in connectivity mapping. Starting from the highest ranked gene, we progressively determine the minimum length of the gene signature that allows connections to the reference profiles (drugs) to be established with a preset target false discovery rate. We use a lung cancer dataset and a breast cancer dataset as two case studies to demonstrate how this standardized procedure works, and we show that highly relevant and interesting biological connections are returned. Of particular note is gefitinib, identified as among the candidate therapeutics in our lung cancer case study.
Our gene signature was based on gene expression data from Taiwan female non-smoker lung cancer patients, while there is evidence from independent studies that gefitinib is highly effective in treating women, non-smoker or former light smoker, advanced non-small cell lung cancer patients of Asian origin. In summary, we introduced a gene signature progression method into connectivity mapping, which enables a standardized procedure for constructing high quality gene signatures. This progression method is particularly useful when the number of differentially expressed genes identified is large, and when there is a need to prioritize them to be included in the query signature. The results from two case studies demonstrate that the approach we have developed is capable of obtaining pertinent candidate drugs with high precision.
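The progression idea can be sketched as below. This toy replaces the paper's connectivity scores and false-discovery-rate control with a simple sign-agreement score and a fixed target, and the gene names and minimum signature length are hypothetical.

```python
def connection_score(signature, reference):
    """Toy connection strength: fraction of signature genes whose
    up/down direction agrees with the reference profile's sign."""
    agree = sum(1 for g, d in signature if reference.get(g, 0) == d)
    return agree / len(signature)

def progress_signature(ranked, reference, target=0.8, min_len=3):
    """Grow the signature from the top-ranked gene until the connection
    score reaches the target, returning the minimal such signature."""
    for k in range(min_len, len(ranked) + 1):
        sig = ranked[:k]
        if connection_score(sig, reference) >= target:
            return sig
    return ranked

# Ranked (gene, direction) pairs, most significant first; directions are
# +1 (up-regulated) or -1 (down-regulated). Names are hypothetical.
ranked = [("EGFR", 1), ("KRT5", -1), ("MKI67", 1), ("TP63", -1), ("CDH1", 1)]
reference = {"EGFR": 1, "KRT5": -1, "MKI67": 1, "TP63": 1, "CDH1": 1}

signature = progress_signature(ranked, reference)
```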
Planning the FUSE Mission Using the SOVA Algorithm
NASA Technical Reports Server (NTRS)
Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly
2011-01-01
Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude- control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.
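An A*-style best-first search over ordered task sets can be sketched as follows; the task values, durations, and optimistic bound are invented, and the real SOVA attainability computation uses fuzzy inference over a six-degree-of-freedom simulation rather than a simple time budget.

```python
import heapq

def schedule(tasks, horizon):
    """Best-first (A*-style) search over ordered task sets: maximize total
    expected science value under a time horizon. tasks: {name: (value, dur)}."""
    start = (0.0, 0.0, ())                      # (value, time used, order)
    best = start
    # Priority = -(value so far + optimistic value of all remaining tasks).
    heap = [(-sum(v for v, _ in tasks.values()), start)]
    while heap:
        _, (val, used, order) = heapq.heappop(heap)
        if val > best[0]:
            best = (val, used, order)
        remaining = [n for n in tasks if n not in order]
        bound = val + sum(tasks[n][0] for n in remaining)
        if bound <= best[0]:
            continue                            # prune: cannot beat incumbent
        for n in remaining:
            v, d = tasks[n]
            if used + d <= horizon:
                state = (val + v, used + d, order + (n,))
                opt = state[0] + sum(tasks[m][0] for m in remaining if m != n)
                heapq.heappush(heap, (-opt, state))
    return best

# Hypothetical observations/maneuvers: name -> (science value, duration):
tasks = {"obs_A": (5.0, 3.0), "obs_B": (4.0, 2.0),
         "slew_C": (1.0, 1.0), "obs_D": (6.0, 4.0)}
value, used, order = schedule(tasks, horizon=6.0)
```

The optimistic bound (all remaining value attainable) plays the role of the admissible heuristic that makes A* prune without losing the optimum.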
First results in terrain mapping for a roving planetary explorer
NASA Technical Reports Server (NTRS)
Krotkov, E.; Caillas, C.; Hebert, M.; Kweon, I. S.; Kanade, Takeo
1989-01-01
To perform planetary exploration without human supervision, a complete autonomous rover must be able to model its environment while exploring its surroundings. Researchers present a new algorithm to construct a geometric terrain representation from a single range image. The form of the representation is an elevation map that includes uncertainty, unknown areas, and local features. By virtue of working in spherical-polar space, the algorithm is independent of the desired map resolution and the orientation of the sensor, unlike other algorithms that work in Cartesian space. They also describe new methods to evaluate regions of the constructed elevation maps to support legged locomotion over rough terrain.
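The elevation-map construction can be sketched by binning spherical-polar returns into a Cartesian grid, with unknown cells marked explicitly; the scan values and cell size are hypothetical, and the paper's algorithm additionally tracks uncertainty and local features.

```python
import math

def elevation_map(scans, cell=1.0, size=10):
    """Build a grid elevation map from (azimuth, elevation_angle, range)
    returns; keeps the highest point per cell and marks unknown cells."""
    grid = [[None] * size for _ in range(size)]   # None = unknown area
    for az, el, r in scans:
        # Spherical-polar to Cartesian, sensor at the origin:
        x = r * math.cos(el) * math.cos(az)
        y = r * math.cos(el) * math.sin(az)
        z = r * math.sin(el)
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < size and 0 <= j < size:
            if grid[i][j] is None or z > grid[i][j]:
                grid[i][j] = z
    return grid

# Hypothetical returns (radians, meters) from a downward-tilted scanner:
scans = [(0.0, -0.3, 8.0), (0.1, -0.3, 8.0), (0.0, -0.5, 6.0)]
grid = elevation_map(scans)
```

Because the binning happens after the spherical-to-Cartesian conversion, the grid resolution is chosen independently of the sensor's angular sampling, which is the property the abstract emphasizes.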
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy, and the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, representing a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
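A minimal PSO of the kind used to seed the BP network's weights can be sketched as below; the quadratic loss stands in for a real network's training loss, and the swarm hyperparameters are conventional choices, not those of the paper.

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm: returns (best weight vector, best loss)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for a BP network's loss as a function of its initial weights:
loss = lambda ws: sum((x - 0.5) ** 2 for x in ws)
weights, best = pso_minimize(loss, dim=4)
```

In the paper's parallel design, evaluating `loss` for the swarm is the part distributed across MapReduce workers.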
NASA Astrophysics Data System (ADS)
Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.
2018-02-01
Light detection and ranging (lidar) data have been increasingly used for forest classification due to their ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high resolution aerial imagery. Rainforest stands were also identified in the region that had not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers, which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, the classification accuracy was only marginally better than linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.
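A two-endmember linear unmixing step can be sketched as follows; the structure metrics and endmember values are invented stand-ins for the study's lidar-derived structural variables and field-plot endmembers.

```python
def unmix(pixel, em_a, em_b):
    """Fraction of endmember A in a two-endmember linear mixture, by
    least squares along the line between the endmembers, clipped to [0,1]."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, em_a, em_b))
    den = sum((a - b) ** 2 for a, b in zip(em_a, em_b))
    return max(0.0, min(1.0, num / den))

# Hypothetical lidar structure metrics (canopy height, cover fraction,
# lower-strata density) for the two stand-type endmembers:
rainforest = [25.0, 0.9, 0.8]
eucalypt = [40.0, 0.6, 0.3]
pixel = [32.5, 0.75, 0.55]     # a midway (ecotone-like) mixture
frac = unmix(pixel, rainforest, eucalypt)
```

Intermediate fractions like this are exactly the ecotone signal the paper maps; the random forest instead outputs a class probability per pixel.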
A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.
Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun
2015-08-31
Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
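The Mahalanobis-distance refinement can be sketched in one dimension, where the weighted least-squares solution is closed-form; the feature coordinates, uncertainties, and linear "projection" model (predicted = map_coord - t) are hypothetical simplifications of the paper's 6-DoF pose optimization.

```python
def refine_pose(features):
    """Refine a 1-D pose offset t by minimizing summed squared Mahalanobis
    distances; features are (map_coord, observed_coord, sigma). With the
    linear toy model predicted = map_coord - t, the minimizer is the
    inverse-variance weighted mean of the per-feature offsets."""
    w = [1.0 / (s * s) for _, _, s in features]
    return sum(wi * (m - o) for wi, (m, o, _) in zip(w, features)) / sum(w)

# (map coordinate, observed coordinate, 1-sigma from the probabilistic map):
features = [(10.0, 8.0, 1.0), (6.0, 4.1, 0.5), (3.0, 0.9, 2.0)]
t = refine_pose(features)
```

The effect to note is that the low-uncertainty feature (sigma 0.5) pulls the estimate hardest, which is the point of Mahalanobis rather than Euclidean residuals.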
An optimization method of VON mapping for energy efficiency and routing in elastic optical networks
NASA Astrophysics Data System (ADS)
Liu, Huanlin; Xiong, Cuilian; Chen, Yong; Li, Changping; Chen, Derun
2018-03-01
To improve resource utilization efficiency, network virtualization in elastic optical networks has been developed by sharing the same physical network among different users and applications. In the process of virtual node mapping, longer paths between physical nodes will consume more spectrum resources and energy. To address this problem, we propose a virtual optical network mapping algorithm called the genetic multi-objective optimized virtual optical network mapping algorithm (GM-OVONM-AL), which jointly optimizes the energy consumption and spectrum resource consumption in the process of virtual optical network mapping. First, a vector function is proposed to balance the energy consumption and spectrum resources by optimizing population classification and crowding-distance sorting. Then, an adaptive crossover operator based on hierarchical comparison is proposed to improve search ability and convergence speed. In addition, the principle of survival of the fittest is introduced to select better individuals according to their domination rank. Compared with the spectrum consecutiveness-opaque virtual optical network mapping algorithm and the baseline opaque virtual optical network mapping algorithm, simulation results show that the proposed GM-OVONM-AL achieves the lowest bandwidth blocking probability and reduces energy consumption.
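The crowding-distance sorting used to balance the two objectives can be sketched as in NSGA-II; the (energy, spectrum) objective values below are hypothetical.

```python
def crowding_distance(objs):
    """NSGA-II-style crowding distance for a list of objective vectors;
    boundary solutions in each objective get infinite distance."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for a in range(1, n - 1):
            i = order[a]
            dist[i] += (objs[order[a + 1]][k] - objs[order[a - 1]][k]) / (hi - lo)
    return dist

# Hypothetical (energy, spectrum-slot) objective values for four candidate
# virtual-network mappings on the same non-dominated front:
pop = [(10.0, 30.0), (12.0, 24.0), (15.0, 20.0), (20.0, 18.0)]
d = crowding_distance(pop)
```

Within a domination rank, individuals with larger crowding distance are preferred, which preserves spread along the energy-spectrum trade-off front.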
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
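Once representative fingerprints are linked to reference positions, online localization reduces to nearest-fingerprint matching; the radio map below is a hypothetical three-anchor example, not data from the paper.

```python
def locate(query, radio_map):
    """Nearest-fingerprint localization: return the reference position
    whose representative RSSI fingerprint is closest (Euclidean) to the
    query fingerprint."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(radio_map, key=lambda pos: dist(query, radio_map[pos]))

# Hypothetical radio map: anchor (door) positions -> representative RSSI
# fingerprints (dBm from three access points), as produced by clustering
# crowdsourced samples.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-60, -50, -75],
    (5.0, 5.0): [-75, -55, -45],
}
position = locate([-58, -52, -74], radio_map)
```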
NASA Astrophysics Data System (ADS)
Lang, K. A.; Petrie, G.
2014-12-01
Extended field-based summer courses provide an invaluable field experience for undergraduate majors in the geosciences. These courses often use the construction of geological maps and structural cross sections as the primary pedagogical tool to teach basic map orientation, rock identification and structural interpretation. However, advances in the usability and ubiquity of Geographic Information Systems in these courses present new opportunities to evaluate student work. In particular, computer-based quantification of systematic mapping errors elucidates the factors influencing student success in the field. We present a case example from a mapping exercise conducted in a summer Field Geology course at a popular field location near Dillon, Montana. We use a computer algorithm to automatically compare the placement and attribution of unit contacts with spatial variables including topographic slope, aspect, bedding attitude, ground cover and distance from the starting location. We complement these analyses with anecdotal and survey data suggesting that both physical factors (e.g. steep topographic slope) and structural nuance (e.g. low-angle bedding) may dominate student frustration, particularly in courses with a high student-to-instructor ratio. We propose mechanisms to improve the student experience by allowing students to practice skills with orientation games and broadening student background with tangential lessons (e.g. on colluvial transport processes). We also suggest low-cost ways to decrease the student-to-instructor ratio by supporting returning undergraduates from previous years or staging mapping over smaller areas. Future applications of this analysis might include a rapid and objective system for evaluating student maps (including point data, such as attitude measurements) and quantifying temporal trends in student work as class sizes, pedagogical approaches or environmental variables change.
Long-term goals include understanding and characterizing stochasticity in geological mapping beyond the undergraduate classroom, and better quantifying uncertainty in published map products.
NASA Astrophysics Data System (ADS)
Sullivan, J. T.; McGee, T. J.; Leblanc, T.; Sumnicht, G. K.; Twigg, L. W.
2015-10-01
The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical ozone concentrations and ozone layers aloft, especially during air quality episodes. For these reasons, this paper addresses the procedures necessary to validate the TROPOZ retrieval algorithm and confirm that it is properly representing ozone concentrations. This paper focuses on ensuring that the TROPOZ algorithm properly quantifies ozone concentrations; a following paper will focus on a systematic uncertainty analysis. The methodology begins by simulating synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases with respect to the known profile. This was performed systematically to identify any areas needing refinement for a new operational version of the TROPOZ retrieval algorithm. One immediate outcome of this exercise was the discovery of a bin registration error in the correction for detector saturation within the original retrieval, which was subsequently corrected. Another notable outcome was that the vertical smoothing in the retrieval algorithm was upgraded from a constant vertical resolution to a variable vertical resolution to yield a statistical uncertainty of <10 %. This new and optimized vertical-resolution scheme retains the ability to resolve fluctuations in the known ozone profile, but now allows near-field signals to be more appropriately smoothed. With these revisions to the previous TROPOZ retrieval, the optimized TROPOZ retrieval algorithm (TROPOZopt) has been able to retrieve ozone nearly 200 m closer to the surface.
Also, as compared to the previous version of the retrieval, the TROPOZopt had an overall mean improvement of 3.5 %, and large improvements (upwards of 10-15 % as compared to the previous algorithm) were apparent between 4.5 and 9 km. Finally, to ensure the TROPOZopt retrieval algorithm is robust enough to handle actual lidar return signals, a comparison is shown between four nearby ozonesonde measurements. The ozonesondes are mostly within the TROPOZopt retrieval uncertainty bars, which implies that this exercise was quite successful.
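The synthetic-signal validation described above rests on the standard DIAL relation n(z) = (1/(2Δσ)) d/dz ln(P_off/P_on). A toy round-trip sketch, which ignores range, backscatter, and all interfering terms, and assumes an illustrative differential cross-section value, is:

```python
import math

DSIGMA = 1e-17  # assumed differential ozone absorption cross section, cm^2

def synthetic_signals(n_profile, dz_cm):
    """Build on-line/off-line return signals for a known ozone profile
    (molecules/cm^3 per bin), keeping only the absorption term."""
    p_on, p_off, tau = [], [], 0.0
    for n in n_profile:
        tau += n * DSIGMA * dz_cm          # one-way ozone optical depth
        p_on.append(math.exp(-2.0 * tau))  # two-way attenuation on-line
        p_off.append(1.0)                  # off-line: no ozone absorption
    return p_on, p_off

def retrieve(p_on, p_off, dz_cm):
    """DIAL retrieval: n(z) = d/dz ln(P_off/P_on) / (2 * DSIGMA)."""
    out, prev = [], 0.0
    for on, off in zip(p_on, p_off):
        ratio = math.log(off / on)
        out.append((ratio - prev) / (2.0 * DSIGMA * dz_cm))
        prev = ratio
    return out
```

Running a known profile through `synthetic_signals` and then `retrieve` recovers it, which is the kind of closure check the paper uses to expose algorithm biases.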
Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio
2016-02-04
Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps, and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle's location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research, an algorithm is developed that attempts to solve the positioning errors that occur when Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent.
Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio
2016-01-01
Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps, and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle's location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research, an algorithm is developed that attempts to solve the positioning errors that occur when Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent. PMID:26861320
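The elementary geometric step underlying map matching, snapping a GNSS fix to the nearest link of the digital map by point-to-segment projection, can be sketched as follows. This is an illustration of the general technique, not the authors' algorithm, which additionally handles signal-loss episodes:

```python
def match_to_link(point, links):
    """Snap a position fix to the nearest link (segment) of a digital map.
    links: list of ((x1, y1), (x2, y2)) segments.
    Returns (best_link_index, projected_point)."""
    def project(p, a, b):
        # Orthogonal projection of p onto segment a-b, clamped to the ends.
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return a
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        return (ax + t * dx, ay + t * dy)

    best = None
    for i, (a, b) in enumerate(links):
        q = project(point, a, b)
        d2 = (q[0] - point[0]) ** 2 + (q[1] - point[1]) ** 2
        if best is None or d2 < best[0]:
            best = (d2, i, q)
    return best[1], best[2]
```

Practical map matchers add heading, speed, and topological continuity constraints on top of this geometric core.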
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map
Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen
2015-01-01
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. First, candidate regions where aircraft may be present are detected within the apron area. Second, a directional local gradient distribution detector is used to obtain a gradient textural saliency map over the candidate regions. Finally, the targets are detected by segmenting the saliency map using a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that the algorithm can detect aircraft targets quickly and accurately while decreasing the false alarm rate. PMID:26378543
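The final CFAR-type segmentation step can be illustrated with a one-dimensional cell-averaging CFAR detector; the guard/training window sizes and threshold scale below are assumed parameters, and the paper's detector operates on the 2-D saliency map rather than this simplified 1-D signal:

```python
def ca_cfar(signal, guard, train, scale):
    """1-D cell-averaging CFAR: a cell is declared a detection when it
    exceeds scale * (mean of surrounding training cells), where cells
    within the guard band around the cell under test are excluded."""
    n = len(signal)
    hits = []
    for i in range(n):
        cells = []
        for j in range(i - guard - train, i + guard + train + 1):
            if 0 <= j < n and abs(j - i) > guard:
                cells.append(signal[j])
        if cells and signal[i] > scale * (sum(cells) / len(cells)):
            hits.append(i)
    return hits
```

The adaptive threshold tracks the local clutter level, which is what keeps the false alarm rate roughly constant across a scene.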
FRS Geospatial Return File Format
The Geospatial Return File Format describes the format that must be used to submit latitude and longitude coordinates for use in Envirofacts mapping applications. These coordinates are stored in the Geospatial Reference Tables.
Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.
Dastmalchi, Pouya; Veronis, Georgios
2013-12-30
We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
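The space mapping iteration can be sketched in one dimension. The fine model, coarse model, and parameter-extraction routine below are hypothetical stand-ins for the FDFD simulation and the transmission-line model; the loop structure (extract a coarse parameter matching the fine response, then correct the design toward the coarse optimum) is the generic aggressive space mapping pattern with a unit Broyden matrix:

```python
def aggressive_space_mapping(fine, coarse_optimum, extract, x0,
                             tol=1e-9, max_iter=50):
    """1-D aggressive space mapping sketch.

    fine: expensive model evaluated at design x.
    coarse_optimum: optimal parameter of the cheap coarse model.
    extract: parameter extraction, returning the coarse parameter z
             whose coarse response matches the fine response at x.
    """
    x = x0
    for _ in range(max_iter):
        z = extract(x, fine(x))       # parameter-extraction step
        step = coarse_optimum - z     # drive z(x) toward coarse optimum
        if abs(step) < tol:
            break
        x += step
    return x
```

Only a handful of fine-model evaluations are needed when the coarse model is a good surrogate, which mirrors the few FDFD simulations reported above.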
Mapping the conformational free energy of aspartic acid in the gas phase and in aqueous solution.
Comitani, Federico; Rossi, Kevin; Ceriotti, Michele; Sanz, M Eugenia; Molteni, Carla
2017-04-14
The conformational free energy landscape of aspartic acid, a proteogenic amino acid involved in a wide variety of biological functions, was investigated as an example of the complexity that multiple rotatable bonds produce even in relatively simple molecules. To efficiently explore such a landscape, this molecule was studied in the neutral and zwitterionic forms, in the gas phase and in water solution, by means of molecular dynamics and the enhanced sampling method metadynamics with classical force-fields. Multi-dimensional free energy landscapes were reduced to bi-dimensional maps through the non-linear dimensionality reduction algorithm sketch-map to identify the energetically stable conformers and their interconnection paths. Quantum chemical calculations were then performed on the minimum free energy structures. Our procedure returned the low energy conformations observed experimentally in the gas phase with rotational spectroscopy [M. E. Sanz et al., Phys. Chem. Chem. Phys. 12, 3573 (2010)]. Moreover, it provided information on higher energy conformers not accessible to experiments and on the conformers in water. The comparison between different force-fields and quantum chemical data highlighted the importance of the underlying potential energy surface to accurately capture energy rankings. The combination of force-field based metadynamics, sketch-map analysis, and quantum chemical calculations was able to produce an exhaustive conformational exploration in a range of significant free energies that complements the experimental data. Similar protocols can be applied to larger peptides with complex conformational landscapes and would greatly benefit from the next generation of accurate force-fields.
Mapping the conformational free energy of aspartic acid in the gas phase and in aqueous solution
NASA Astrophysics Data System (ADS)
Comitani, Federico; Rossi, Kevin; Ceriotti, Michele; Sanz, M. Eugenia; Molteni, Carla
2017-04-01
The conformational free energy landscape of aspartic acid, a proteogenic amino acid involved in a wide variety of biological functions, was investigated as an example of the complexity that multiple rotatable bonds produce even in relatively simple molecules. To efficiently explore such a landscape, this molecule was studied in the neutral and zwitterionic forms, in the gas phase and in water solution, by means of molecular dynamics and the enhanced sampling method metadynamics with classical force-fields. Multi-dimensional free energy landscapes were reduced to bi-dimensional maps through the non-linear dimensionality reduction algorithm sketch-map to identify the energetically stable conformers and their interconnection paths. Quantum chemical calculations were then performed on the minimum free energy structures. Our procedure returned the low energy conformations observed experimentally in the gas phase with rotational spectroscopy [M. E. Sanz et al., Phys. Chem. Chem. Phys. 12, 3573 (2010)]. Moreover, it provided information on higher energy conformers not accessible to experiments and on the conformers in water. The comparison between different force-fields and quantum chemical data highlighted the importance of the underlying potential energy surface to accurately capture energy rankings. The combination of force-field based metadynamics, sketch-map analysis, and quantum chemical calculations was able to produce an exhaustive conformational exploration in a range of significant free energies that complements the experimental data. Similar protocols can be applied to larger peptides with complex conformational landscapes and would greatly benefit from the next generation of accurate force-fields.
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold to divide the crude estimated transmission map into different areas and applies a separately tuned guided filter to each area to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while its average computation time is around 40% of that algorithm's and its detection ability for UAV images in fog and haze is improved effectively.
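The crude transmission estimate that the algorithm refines comes from the standard dark channel prior formulation, which can be sketched in plain Python (nested lists of RGB tuples in [0, 1]); omega = 0.95 follows the usual dark-channel convention, and the edge-guided filtering refinement described above is omitted:

```python
def dark_channel(image, patch):
    """Dark channel: per-pixel minimum over color channels, followed by
    a minimum over a local patch of the given side length."""
    h, w = len(image), len(image[0])
    mins = [[min(px) for px in row] for row in image]
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = min(mins[x][y]
                            for x in range(max(0, i - r), min(h, i + r + 1))
                            for y in range(max(0, j - r), min(w, j + r + 1)))
    return out

def transmission(image, atmosphere, patch, omega=0.95):
    """Crude transmission estimate t = 1 - omega * dark_channel(I / A)."""
    norm = [[tuple(c / a for c, a in zip(px, atmosphere)) for px in row]
            for row in image]
    dc = dark_channel(norm, patch)
    return [[1.0 - omega * v for v in row] for row in dc]
```

The paper's contribution sits downstream of this step: segmenting this crude map along expanded edges and filtering each region differently.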
NASA Technical Reports Server (NTRS)
Kweon, In SO; Hebert, Martial; Kanade, Takeo
1989-01-01
A three-dimensional perception system for building a geometrical description of rugged terrain environments from range image data is presented with reference to the exploration of the rugged terrain of Mars. An intermediate representation consisting of an elevation map that includes an explicit representation of uncertainty and labeling of the occluded regions is proposed. The locus method used to convert range image to an elevation map is introduced, along with an uncertainty model based on this algorithm. Both the elevation map and the locus method are the basis of a terrain matching algorithm which does not assume any correspondences between range images. The two-stage algorithm consists of a feature-based matching algorithm to compute an initial transform and an iconic terrain matching algorithm to merge multiple range images into a uniform representation. Terrain modeling results on real range images of rugged terrain are presented. The algorithms considered are a fundamental part of the perception system for the Ambler, a legged locomotor.
Diversified models for portfolio selection based on uncertain semivariance
NASA Astrophysics Data System (ADS)
Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini
2017-02-01
Since financial markets are complex, the future security returns are sometimes represented mainly by experts' estimations owing to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given by experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
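The sample analogue of the semivariance the models build on can be stated directly: it penalizes only downside deviations below a target return (here defaulting to the sample mean), which is what distinguishes it from ordinary variance. This is a generic sketch, not the paper's uncertain-variable 99-method:

```python
def semivariance(returns, target=None):
    """Downside semivariance of a sample of returns: mean squared
    shortfall below the target (default: the sample mean)."""
    if target is None:
        target = sum(returns) / len(returns)
    return sum(min(0.0, r - target) ** 2 for r in returns) / len(returns)
```

In a mean-semivariance portfolio model, a genetic algorithm would search over allocation weights while a quantity like this one scores the downside risk of each candidate allocation.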
Comparison of three methods for materials identification and mapping with imaging spectroscopy
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg; Boardman, Joe; Kruse, Fred
1993-01-01
We are comparing three mapping analysis tools for imaging spectroscopy data. The purpose of this comparison is to understand the advantages and disadvantages of each algorithm so that others will be better able to choose the best algorithm, or combination of algorithms, for a particular problem. The three algorithms are: (1) the spectral-feature modified least-squares mapping algorithm of Clark et al. (1990, 1991): the programs mbandmap and tricorder; (2) the Spectral Angle Mapper algorithm (Boardman, 1993) found in the CU CSES SIPS package; and (3) the expert system of Kruse et al. (1993). The comparison uses a ground-calibrated 1990 AVIRIS scene of 400 by 410 pixels over Cuprite, Nevada, along with a spectral library of 38 minerals. Each algorithm is tested with the same AVIRIS data set and spectral library. Field work has confirmed the presence of many of these minerals in the AVIRIS scene (Swayze et al., 1992).
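The second of the three algorithms, the Spectral Angle Mapper, reduces to an angle computation between a pixel spectrum and each library spectrum; smaller angles mean better matches, and the measure is insensitive to overall illumination scaling. This sketch is the generic SAM formulation with a toy two-entry library (the mineral names and spectra are illustrative only):

```python
import math

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectra treated as vectors."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp to guard against floating-point overshoot before acos.
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))

def classify(pixel, library):
    """Name of the library spectrum with the smallest spectral angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))
```

Because the angle ignores vector length, a pixel that is a brighter or darker version of a library mineral still maps to that mineral.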
Lossless Compression of Classification-Map Data
NASA Technical Reports Server (NTRS)
Hua, Xie; Klimesh, Matthew
2009-01-01
A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than general-purpose image-data-compression algorithms do. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing images of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data: for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
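The prediction stage can be illustrated with the simplest possible predictor, the left neighbor. In a classification map, long runs of a single class make most residuals zero, which any entropy coder then compresses well. This is a generic sketch, not the algorithm's actual predictor or context model:

```python
def predict_residuals(cmap):
    """Left-neighbor prediction over a classification map (list of rows
    of integer class indices). A residual of 0 means 'same class as the
    left neighbor'; the first pixel of each row is predicted as 0."""
    out = []
    for row in cmap:
        prev = 0
        residuals = []
        for v in row:
            residuals.append(v - prev)
            prev = v
        out.append(residuals)
    return out
```

Class indices have no numeric meaning, so a real coder would treat residuals as symbols conditioned on neighboring classes (context modeling) rather than as signed differences.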
An improved non-blind image deblurring method based on FoEs
NASA Astrophysics Data System (ADS)
Zhu, Qidan; Sun, Lei
2013-03-01
Traditional non-blind image deblurring algorithms typically use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can effectively reduce ripple artifacts in contrast to maximum likelihood (ML), but they have been found lacking in terms of restoration performance. To address this issue, we replace the traditional MAP objective with a MAP formulation that includes a KL penalty. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty can restrain the over-smoothing caused by MAP. We use three groups of images and Harris corner detection to evaluate our method. The experimental results show that our non-blind image restoration algorithm can effectively reduce the ringing effect and exhibits state-of-the-art deblurring results.
Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa
2015-11-15
Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I - high confidence with clear pattern of activation through to Grade IV - non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; window-of-interest<100% of cycle length (CL); <95% tachycardia CL mapped; variability of CL and/or unstable fiducial reference marker; and suboptimal bar height and scar settings. A data collection and map interpretation algorithm has been developed to optimize Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Einstein, Daniel R.; Kuprat, Andrew P.; Jiao, Xiangmin
2013-01-01
Geometries for organ-scale and multiscale simulations of organ function are now routinely derived from imaging data. However, medical images may also contain spatially heterogeneous information other than geometry that is relevant to such simulations, either as initial conditions or in the form of model parameters. In this manuscript, we present an algorithm for the efficient and robust mapping of such data to imaging-based unstructured polyhedral grids in parallel. We then illustrate the application of our mapping algorithm to three different mapping problems: (1) the mapping of MRI diffusion tensor data to an unstructured ventricular grid; (2) the mapping of serial cryo-section histology data to an unstructured mouse brain grid; and (3) the mapping of CT-derived volumetric strain data to an unstructured multiscale lung grid. Execution times and parallel performance are reported for each case.
NASA Technical Reports Server (NTRS)
Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak
2003-01-01
In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
Mbagwu, Michael; French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J
2016-05-04
Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (eg, Snellen vs. Jäger units vs. font print size), and be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step for making electronic ophthalmology clinical notes useful in research. Currently, there is limited methodology for capturing BDVA in an efficient and accurate manner from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within electronic ophthalmology clinical notes. We designed an algorithm to detect the BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. A total of 5668 unique responses were identified and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed a 99% accuracy detecting BDVA from these records and 1% observed error. Our algorithm successfully captures best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. Our work, to the best of our knowledge, represents the first attempt at capturing visual acuity accurately from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes.
All codes used for this study are currently available, and will be made available online at https://phekb.org.
French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J
2016-01-01
Background Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (eg, Snellen vs. Jäger units vs. font print size), and be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step for making electronic ophthalmology clinical notes useful in research. Objective Currently, there is limited methodology for capturing BDVA in an efficient and accurate manner from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within electronic ophthalmology clinical notes. Methods We designed an algorithm to detect the BDVA from defined fields within 295,218 ophthalmology clinical notes with visual acuity data present. A total of 5668 unique responses were identified and an algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. Results Visual acuity was captured from a total of 295,218 ophthalmology clinical notes during the study dates. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed a 99% accuracy detecting BDVA from these records and 1% observed error. Conclusions Our algorithm successfully captures best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. Our work, to the best of our knowledge, represents the first attempt at capturing visual acuity accurately from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes.
All codes used for this study are currently available, and will be made available online at https://phekb.org. PMID:27146002
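The mapping of free-text acuity responses to a structured Snellen list, and the selection of the best documented value per eye, can be sketched as follows. The mapping table, ranking, and normalization rule are hypothetical stand-ins for the study's much larger 5668-entry mapping:

```python
import re

# Hypothetical lookup: a few free-text responses -> structured Snellen values.
SNELLEN_MAP = {
    "20/20": "20/20",
    "20/40": "20/40",
    "cf": "CF",   # count fingers
    "hm": "HM",   # hand motion
}

# Higher rank = better acuity (illustrative ordering).
SNELLEN_RANK = {"20/20": 4, "20/40": 3, "CF": 2, "HM": 1}

def normalize(raw):
    """Map one raw visual-acuity string to a structured Snellen value,
    or None when the response is unrecognized."""
    token = re.sub(r"[^0-9a-z/]", "", raw.strip().lower())
    return SNELLEN_MAP.get(token)

def best_documented(responses):
    """Best documented acuity among all of a visit's entries for one eye."""
    vals = [v for v in (normalize(r) for r in responses) if v]
    return max(vals, key=SNELLEN_RANK.get) if vals else None
```

The real algorithm's accuracy hinges on how completely that lookup table covers the observed free-text variants; unrecognized responses are simply skipped here.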
Mobile robot motion estimation using Hough transform
NASA Astrophysics Data System (ADS)
Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu
2018-05-01
This paper proposes an algorithm for estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry from any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
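The core Hough step, accumulating scan points into a straight-line (theta, rho) parameter space and reading off the dominant line, can be sketched as follows; the angular and radial bin sizes are assumed for illustration, and the paper's rotation/scale/translation decomposition is built on top of such accumulators:

```python
import math

def hough_lines(points, n_theta=180, rho_res=0.1):
    """Vote each (x, y) point into (theta, rho) line-parameter cells,
    using the normal form rho = x*cos(theta) + y*sin(theta).
    Returns the accumulator as a dict keyed by (theta_index, rho_bin)."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_res))
            acc[key] = acc.get(key, 0) + 1
    return acc

def dominant_line(points):
    """(theta, rho) of the most-voted cell: the dominant wall or edge."""
    acc = hough_lines(points)
    (t, rbin), _ = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t / 180, rbin * 0.1
```

Because every point votes independently, a few noisy ranges or outliers barely perturb the winning cell, which is the robustness property noted above.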
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
Design and application of star map simulation system for star sensors
NASA Astrophysics Data System (ADS)
Wu, Feng; Shen, Weimin; Zhu, Xifang; Chen, Yuheng; Xu, Qinquan
2013-12-01
Modern star sensors can measure attitude automatically, ensuring reliable spacecraft performance. They achieve very accurate attitudes by applying algorithms to process star maps obtained by the star camera mounted on them. Therefore, star maps play an important role in designing star cameras and developing processing algorithms. Furthermore, star maps provide significant support for examining the performance of star sensors thoroughly before launch. However, it is not always convenient to obtain abundant star maps by taking pictures of the sky. Thus, star map simulation with the aid of a computer attracts considerable interest by virtue of its low price and convenience. A method to simulate star maps by programming and extending the functions of the optical design program ZEMAX is proposed, and a star map simulation system is established. First, based on an analysis of the working procedures by which star sensors measure attitude and the basic method of designing optical systems in ZEMAX, the principle of simulating star sensor imaging is set out in detail. The theory of adding false stars and noise, and of outputting maps, is discussed, and the corresponding approaches are proposed. Then, by external programming, the star map simulation program is designed and produced; its user interface and operation are introduced. Applications of the star map simulation method in evaluating the optical system, the star image extraction algorithm and the star identification algorithm, and in calibrating system errors, are presented. It is shown that the proposed simulation method provides substantial support for the study of star sensors and improves their performance efficiently.
Doran, Kara S.; Sallenger, Asbury H.; Reynolds, Billy J.; Wright, C. Wayne
2010-01-01
The NASA Experimental Advanced Airborne Lidar (EAARL) is an airborne lidar (light detection and ranging) instrument designed to map coastal topography and bathymetry. The EAARL system has the capability to capture each laser-pulse return over a large signal range and can digitize the full waveform of the backscattered energy. Because of this ability to capture the full waveform, the EAARL system can map features such as coral reefs, beaches, coastal vegetation, and trees, where extreme variations in the laser backscatter are caused by different physical and optical characteristics. Post-processing of the EAARL data is accomplished using the Airborne Lidar Processing System (ALPS) (Nayegandhi and others, 2009). In ALPS, the waveform of the lidar is analyzed and split into first and last returns. The 'first returns' are indicative of vegetation-canopy height, or bare ground in the absence of vegetation, whereas 'last returns' typically represent 'bare-earth' elevations under vegetation. To test the accuracy of the first-return and bare-earth EAARL data, topographic and vegetation height surveys were conducted in the Chandeleur Islands, concurrent with an EAARL lidar survey and an aerial oblique-photographic survey from September 20 to 27, 2006. The Chandeleur Islands are a north-south-oriented chain of low-lying islands located approximately 100 kilometers east of the city of New Orleans, Louisiana. The islands are narrow north-south strips of land with marsh on their landward (west) sides and sandy beaches on their gulfward (east) sides. Prior to Hurricane Katrina, which made landfall at Buras, Louisiana, as a Category 3 storm on August 29, 2005, prominent, 3- to 4-meter-high sand dunes were present in the northern Chandeleurs. The storm removed them, leaving post-storm island elevations of generally less than 2 meters above 0.0 NAVD88.
This report is part of a study of the impact of Hurricane Katrina on the Chandeleur Islands using pre-storm and post-storm lidar surveys to detect morphological changes. The islands lost over 80 percent of their land area during Hurricane Katrina, and in the first 2 years following Katrina, many of the islands experienced continued shoreline retreat (Sallenger and others, 2007). In addition to land-area losses, the loss of dunes made the islands increasingly vulnerable to future storm impacts. The U.S. Geological Survey, along with partners in the Louisiana Department of Natural Resources and the U.S. Army Corps of Engineers, continues to monitor changes in shoreline position, land area, and elevation in the Chandeleur Islands.
Algorithmic Approaches for Place Recognition in Featureless, Walled Environments
2015-01-01
[Abstract unavailable; only extraction fragments remain: an acronym list (inertial measurement unit; LIDAR, light detection and ranging; RANSAC, random sample consensus; SLAM, simultaneous localization and mapping; SUSAN, smallest...), figure captions referring to a typical input image for a general junction-based algorithm and a short-exposure LIDAR image of a hallway junction, and a remark that SLAM has been studied intensively over the past several years.]
Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Lin, C. T.
1989-01-01
The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on their characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.
Simulation of finite-strain inelastic phenomena governed by creep and plasticity
NASA Astrophysics Data System (ADS)
Li, Zhen; Bloomfield, Max O.; Oberai, Assad A.
2017-11-01
Inelastic mechanical behavior plays an important role in many applications in science and engineering. Phenomenologically, this behavior is often modeled as plasticity or creep. Plasticity is used to represent the rate-independent component of inelastic deformation and creep is used to represent the rate-dependent component. In several applications, especially those at elevated temperatures and stresses, these processes occur simultaneously. In order to model these processes, we develop a rate-objective, finite-deformation constitutive model for plasticity and creep. The plastic component of this model is based on rate-independent J_2 plasticity, and the creep component is based on a thermally activated Norton model. We describe the implementation of this model within a finite element formulation, and present a radial return mapping algorithm for it. This approach reduces the additional complexity of modeling plasticity and creep, over thermoelasticity, to solving just one nonlinear scalar equation at each quadrature point. We implement this algorithm within a multiphysics finite element code and evaluate the consistent tangent through automatic differentiation. We verify and validate the implementation, apply it to modeling the evolution of stresses in the flip chip manufacturing process, and test its parallel strong-scaling performance.
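The radial return update the authors extend can be sketched for the rate-independent J_2 piece alone. This is a textbook-style sketch with linear isotropic hardening and illustrative material constants, not the paper's combined plasticity-creep model; for linear hardening the scalar consistency equation is linear in the plastic multiplier, whereas in the general case it is the one nonlinear scalar equation per quadrature point mentioned above:

```python
import numpy as np

def radial_return(stress_trial, eps_p_bar, mu, sigma_y0, H):
    """One quadrature-point update for rate-independent J2 plasticity
    with linear isotropic hardening (illustrative material model).

    stress_trial : trial stress tensor (3x3)
    eps_p_bar    : accumulated equivalent plastic strain
    mu           : shear modulus
    sigma_y0, H  : initial yield stress and hardening modulus
    """
    p = np.trace(stress_trial) / 3.0
    s = stress_trial - p * np.eye(3)                 # deviatoric part
    s_norm = np.sqrt((s * s).sum())
    # Yield check against the current radius of the von Mises surface.
    radius = np.sqrt(2.0 / 3.0) * (sigma_y0 + H * eps_p_bar)
    f_trial = s_norm - radius
    if f_trial <= 0.0:                               # elastic step
        return stress_trial, eps_p_bar
    # Consistency condition; linear in dgamma only because H is constant.
    dgamma = f_trial / (2.0 * mu + 2.0 / 3.0 * H)
    n = s / s_norm                                   # return direction
    s_new = s - 2.0 * mu * dgamma * n                # radial return of deviator
    return s_new + p * np.eye(3), eps_p_bar + np.sqrt(2.0 / 3.0) * dgamma
```

After a plastic step the updated deviator lands exactly on the updated yield surface, which is the defining property of the predictor-corrector scheme.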
Development of Elevation and Relief Databases for ICESat-2/ATLAS Receiver Algorithms
NASA Astrophysics Data System (ADS)
Leigh, H. W.; Magruder, L. A.; Carabajal, C. C.; Saba, J. L.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.
2013-12-01
The Advanced Topographic Laser Altimeter System (ATLAS) is planned to launch onboard NASA's ICESat-2 spacecraft in 2016. ATLAS operates at a wavelength of 532 nm with a laser repeat rate of 10 kHz and 6 individual laser footprints. The satellite will be in a 500 km, 91-day repeat ground track orbit at an inclination of 92°. A set of onboard Receiver Algorithms has been developed to reduce the data volume and data rate to acceptable levels while still transmitting the relevant ranging data. The onboard algorithms limit the data volume by distinguishing between surface returns and background noise and selecting a small vertical region around the surface return to be included in telemetry. The algorithms make use of signal processing techniques, along with three databases, the Digital Elevation Model (DEM), the Digital Relief Map (DRM), and the Surface Reference Mask (SRM), to find the signal and determine the appropriate dynamic range of vertical data surrounding the surface for downlink. The DEM provides software-based range gating for ATLAS. This approach allows the algorithm to limit the surface signal search to the vertical region between minimum and maximum elevations provided by the DEM (plus some margin to account for uncertainties). The DEM is constructed in a nested, three-tiered grid to account for a hardware constraint limiting the maximum vertical range to 6 km. The DRM is used to select the vertical width of the telemetry band around the surface return. The DRM contains global values of relief calculated along 140 m and 700 m ground track segments consistent with a 92° orbit. The DRM must contain the maximum value of relief seen in any given area, but must be as close to truth as possible as the DRM directly affects data volume. The SRM, which has been developed independently from the DEM and DRM, is used to set parameters within the algorithm and select telemetry bands for downlink. 
Both the DEM and DRM are constructed from publicly available digital elevation models. No elevation models currently exist that provide global coverage at a sufficient resolution, so several regional models have been mosaicked together to produce global databases. In locations where multiple data sets are available, evaluations have been made to determine the optimal source for the databases, primarily based on resolution and accuracy. Separate procedures for calculating relief were developed for high latitude (>60N/S) regions in order to take advantage of polar stereographic projections. An additional method for generating the databases was developed for use over Antarctica, such that high resolution, regional elevation models can be easily incorporated as they become available in the future. The SRM is used to facilitate DEM and DRM production by defining those regions that are ocean and sea ice. Ocean and sea ice elevation values are defined by the geoid, while relief is set to a constant value. Results presented will include the details of data source selection, the methodologies used to create the databases, and the final versions of both the DEM and DRM databases. Companion presentations by McGarry, et al. and Carabajal, et al. describe the ATLAS onboard Receiver Algorithms and the database verification, respectively.
A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.
Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M
2018-04-30
Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290 × faster than Burrows-Wheeler Aligner-MEM with a lower memory footprint and recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
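The core estimate, Jaccard similarity from bottom-s MinHash sketches converted to a per-base identity under a Poisson model of substitution errors, can be sketched as follows. The hash function, k, and sketch size s are illustrative choices, and the actual tool uses winnowed minimizers rather than plain k-mer sketches:

```python
import hashlib
import math

def minhash_sketch(seq, k=16, s=128):
    """Bottom-s MinHash sketch of the k-mer set of seq (illustrative
    hash and parameters)."""
    hashes = {int.from_bytes(
                  hashlib.blake2b(seq[i:i + k].encode(), digest_size=8).digest(),
                  "big")
              for i in range(len(seq) - k + 1)}
    return sorted(hashes)[:s]

def jaccard_estimate(sk_a, sk_b, s=128):
    """Estimate Jaccard similarity from the bottom-s sketch of the union."""
    merged = sorted(set(sk_a) | set(sk_b))[:s]
    shared = len(set(merged) & set(sk_a) & set(sk_b))
    return shared / len(merged)

def identity_from_jaccard(j, k=16):
    """Mash-style conversion of Jaccard similarity to per-base identity."""
    if j <= 0:
        return 0.0
    return 1.0 + math.log(2.0 * j / (1.0 + j)) / k
```

Identical sequences give a Jaccard estimate of 1 and hence identity 1; mutations knock shared k-mers out of the sketch intersection and lower both.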
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
This paper investigates the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
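The Lozi map itself is a two-line recurrence, so a chaotic pseudo-random generator of the kind tuned above can be sketched directly. The parameters a and b are the control parameters under study; the fold of the x coordinate into [0, 1) is an illustrative normalization, not necessarily the authors':

```python
def lozi_prng(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Generate n pseudo-random numbers in [0, 1) by iterating the Lozi
    map x' = 1 - a*|x| + b*y, y' = x, then folding |x| into the unit
    interval. a = 1.7, b = 0.5 are the classic chaotic parameters."""
    out = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + b * y, x
        out.append(abs(x) % 1.0)   # fold the chaotic orbit into [0, 1)
    return out
```

Swapping such a generator in for the uniform random draws of the PSO velocity update is the mechanism whose parameter sensitivity the paper measures.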
2006-01-01
[Abstract unavailable; only extraction fragments remain: the robot's position information (Figure 1) is acquired via laser-based localization techniques, and the results are maps of the global soundscape; provided the acoustic localization algorithm can detect the sources, the soundscape can be mapped; the robot gathers information about the auditory soundscape in which it is working, with robustness in the presence of noise.]
The Improved Locating Algorithm of Particle Filter Based on ROS Robot
NASA Astrophysics Data System (ADS)
Fang, Xun; Fu, Xiaoyang; Sun, Ming
2018-03-01
This paper analyzes the basic theory and primary algorithms of real-time localization and SLAM for robots based on the ROS system. It proposes an improved particle filter localization algorithm that effectively reduces the time needed to match laser radar scans against the map; the addition of ultra-wideband ranging directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling has been reduced by roughly 5/6, directly eliminating the corresponding matching work in the robot's algorithm.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
The evolving Alaska mapping program.
Brooks, P.D.; O'Brien, T. J.
1986-01-01
This paper describes the development of mapping in Alaska, the current status of the National Mapping Program, and future plans for expanding and improving the mapping coverage. Research projects with Landsat Multispectral Scanner and Return Beam Vidicon imagery and real- and synthetic-aperture radar; image mapping programs; digital mapping; remote sensing projects; the Alaska National Interest Lands Conservation Act; and the Alaska High-Altitude Aerial Photography Program are also discussed.
Evaluation of an experimental LiDAR for surveying a shallow, braided, sand-bedded river
Kinzel, P.J.; Wright, C.W.; Nelson, J.M.; Burman, A.R.
2007-01-01
Reaches of a shallow (<1.0m), braided, sand-bedded river were surveyed in 2002 and 2005 with the National Aeronautics and Space Administration's Experimental Advanced Airborne Research LiDAR (EAARL) and concurrently with conventional survey-grade, real-time kinematic, global positioning system technology. The laser pulses transmitted by the EAARL instrument and the return backscatter waveforms from exposed sand and submerged sand targets in the river were completely digitized and stored for postflight processing. The vertical mapping accuracy of the EAARL was evaluated by comparing the ellipsoidal heights computed from ranging measurements made using an EAARL terrestrial algorithm to nearby (<0.5m apart) ground-truth ellipsoidal heights. After correcting for apparent systematic bias in the surveys, the root mean square error of these heights with the terrestrial algorithm in the 2002 survey was 0.11m for the 26 measurements taken on exposed sand and 0.18m for the 59 measurements taken on submerged sand. In the 2005 survey, the root mean square error was 0.18m for 92 measurements taken on exposed sand and 0.24m for 434 measurements on submerged sand. In submerged areas the waveforms were complicated by reflections from the surface, water column entrained turbidity, and potentially the riverbed. When applied to these waveforms, especially in depths greater than 0.4m, the terrestrial algorithm calculated the range above the riverbed. A bathymetric algorithm has been developed to approximate the position of the riverbed in these convolved waveforms and preliminary results are encouraging. © 2007 ASCE.
NASA Astrophysics Data System (ADS)
Scher, C.; Tennant, C.; Larsen, L.; Bellugi, D. G.
2016-12-01
Advances in remote-sensing technology allow for cost-effective, accurate, high-resolution mapping of river-channel topography and shallow aquatic bathymetry over large spatial scales. A combination of near-infrared and green spectra airborne laser swath mapping was used to map river channel bathymetry and watershed geometry over 90+ river-kilometers (75-1175 km2) of the Greys River in Wyoming. The day of flight wetted channel was identified from green LiDAR returns, and more than 1800 valley-bottom cross-sections were extracted at regular 50-m intervals. The bankfull channel geometry was identified using a "watershed-based" algorithm that incrementally filled local minima to a "spill" point, thereby constraining areas of local convergence and delineating all the potential channels along the cross-section for each distinct "spill stage." Multiple potential channels in alluvial floodplains and lack of clearly defined channel banks in bedrock reaches challenge identification of the bankfull channel based on topology alone. Here we combine a variety of topological measures, geometrical considerations, and stage levels to define a stage-dependent bankfull channel geometry, and compare the results with day of flight wetted channel data. Initial results suggest that channel hydraulic geometry and basin hydrology power-law scaling may not accurately capture downstream channel adjustments for rivers draining complex mountain topography.
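The fill-to-spill idea, raising a water level in a cross-section until local minima merge at spill points, reduces at any single stage to finding the contiguous below-stage segments of the cross-section. A minimal sketch of that step (not the authors' implementation):

```python
def channels_at_stage(elev, stage):
    """Return (start, end) index pairs of contiguous cross-section
    segments lying below a given water-surface stage; each segment is a
    candidate channel at that 'spill stage'.

    elev : list of bed elevations sampled across the valley bottom.
    """
    segments, start = [], None
    for i, z in enumerate(elev):
        if z < stage and start is None:
            start = i                       # channel begins
        elif z >= stage and start is not None:
            segments.append((start, i - 1)) # channel ends at a high point
            start = None
    if start is not None:                   # channel runs to the edge
        segments.append((start, len(elev) - 1))
    return segments
```

Sweeping `stage` upward and watching adjacent segments merge reproduces the incremental filling of local minima to their spill points described above.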
The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhou, Liqing
2015-12-01
With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the MeanShift segmentation algorithm under the MapReduce model. The approach not only preserves segmentation quality but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel MeanShift segmentation algorithm is thus of practical significance and value.
NASA Astrophysics Data System (ADS)
He, Yaoyao; Yang, Shanlin; Xu, Qifa
2013-07-01
In order to solve the short-term cascaded hydroelectric system scheduling model, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which uses water discharge as the decision variable combined with a death penalty function. According to the principle of maximum power generation, the proposed approach makes use of the ergodicity, symmetry and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results show the effectiveness and robustness of the proposed CPSO algorithm in comparison with other traditional algorithms.
A MAP blind image deconvolution algorithm with bandwidth over-constrained
NASA Astrophysics Data System (ADS)
Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong
2018-03-01
We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated under a bandwidth constraint limiting them to below the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise amplification. The performance is demonstrated on simulated data.
NASA Astrophysics Data System (ADS)
Liu, Zeyu; Xia, Tiecheng; Wang, Jinbo
2018-03-01
We propose a new fractional two-dimensional triangle function combination discrete chaotic map (2D-TFCDM) with the discrete fractional difference. Moreover, the chaos behaviors of the proposed map are observed and the bifurcation diagrams, the largest Lyapunov exponent plot, and the phase portraits are derived, respectively. Finally, with the secret keys generated by the Menezes–Vanstone elliptic curve cryptosystem, we apply the discrete fractional map to color image encryption. The image encryption algorithm is then analyzed in four aspects, and the results indicate that the proposed algorithm is superior to the other algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61072147 and 11271008).
Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories
NASA Technical Reports Server (NTRS)
Burchett, Bradley T.
2003-01-01
The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three-degree-of-freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani-type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.
2017-09-01
The Semi-Global Matching (SGM) algorithm is known in the photogrammetry community as a high-performance, reliable stereo matching algorithm. However, it faces challenges, especially for high-resolution satellite stereo images over urban areas and images containing shadow. The SGM algorithm computes highly noisy disparity values for shadow areas around tall neighboring buildings due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on integrating panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over the city of Qom, Iran, show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
Environmental Optimization Using the WAste Reduction Algorithm (WAR)
Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...
Evaluating progressive-rendering algorithms in appearance design tasks.
Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio
2013-01-01
Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.
A simple algorithm for large-scale mapping of evergreen forests in tropical America, Africa and Asia
Xiangming Xiao; Chandrashekhar M. Biradar; Christina Czarnecki; Tunrayo Alabi; Michael Keller
2009-01-01
The areal extent and spatial distribution of evergreen forests in the tropical zones are important for the study of climate, carbon cycle and biodiversity. However, frequent cloud cover in the tropical regions makes mapping evergreen forests a challenging task. In this study we developed a simple and novel mapping algorithm that is based on the temporal profile...
An algorithm for testing the efficient market hypothesis.
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
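The indicator building blocks named above follow standard definitions; a minimal sketch of EMA and the MACD line built from it (conventional smoothing 2/(n+1) and 12/26 periods, not parameters from the paper, whose genetic algorithm searches over such values):

```python
def ema(prices, n):
    """Exponential moving average with smoothing factor 2/(n+1),
    seeded with the first price (standard definition)."""
    alpha = 2.0 / (n + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26):
    """MACD line = EMA(fast) - EMA(slow), conventional 12/26 periods."""
    return [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
```

A trading rule of the kind the genetic algorithm evolves would then compare, e.g., the MACD line's sign or an RSI threshold to issue buy/sell signals.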
Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan
2015-01-01
The conventional quality-guided (QG) phase unwrapping algorithm is difficult to apply to digital holographic microscopy because of its long execution time. In this paper, we present a hybrid phase unwrapping algorithm with automatic threshold selection that combines the existing QG algorithm with the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by automatically selecting a threshold, and the FF and QG unwrapping algorithms are then used to unwrap the phase in each level, respectively. The feasibility of the proposed method is demonstrated by experimental results, and its execution speed is much faster than that of the original QG unwrapping algorithm.
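The threshold step amounts to partitioning the per-pixel quality map into two masks. In the sketch below the automatic threshold rule (mean quality) is a placeholder assumption standing in for the paper's selection method, and the subsequent FF and QG unwrapping passes over each sub-map are omitted:

```python
import numpy as np

def split_quality_map(quality, threshold=None):
    """Partition a per-pixel quality map into high- and low-quality
    masks. Mean-quality thresholding is an illustrative automatic
    choice; the fast FF pass would unwrap the high-quality mask and
    the robust QG pass the remainder."""
    if threshold is None:
        threshold = float(quality.mean())   # automatic choice (assumption)
    high = quality >= threshold
    return high, ~high
```

The two masks partition the phase map exactly, so every pixel is unwrapped by exactly one of the two algorithms.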
Apriori Versions Based on MapReduce for Mining Frequent Patterns on Big Data.
Luna, Jose Maria; Padillo, Francisco; Pechenizkiy, Mykola; Ventura, Sebastian
2017-09-27
Pattern mining is one of the most important tasks for extracting meaningful and useful information from raw data. This task aims to extract item-sets that represent any type of homogeneity and regularity in data. Although many efficient algorithms have been developed in this regard, the growing volume of data has caused the performance of existing pattern mining techniques to drop. The goal of this paper is to propose new efficient pattern mining algorithms for big data. To this aim, a series of algorithms based on the MapReduce framework and the Hadoop open-source implementation have been proposed. The proposed algorithms can be divided into three main groups. First, two algorithms [Apriori MapReduce (AprioriMR) and iterative AprioriMR] with no pruning strategy are proposed, which extract any existing item-set in the data. Second, two algorithms (space pruning AprioriMR and top AprioriMR) that prune the search space by means of the well-known anti-monotone property are proposed. Finally, a last algorithm (maximal AprioriMR) is proposed for mining condensed representations of frequent patterns. To test the performance of the proposed algorithms, a varied collection of big data datasets has been considered, comprising up to 3 · 10¹⁸ transactions and more than 5 million distinct single items. The experimental stage includes comparisons against highly efficient and well-known pattern mining algorithms. Results reveal the interest of applying MapReduce versions when complex problems are considered, and also the unsuitability of this paradigm when dealing with small data.
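The map/reduce decomposition of candidate counting, mappers emitting (itemset, 1) pairs per transaction and reducers summing counts and filtering by support, can be sketched on a single machine. The function names and toy data are illustrative; the real AprioriMR jobs distribute the map phase over Hadoop data splits:

```python
from collections import Counter
from itertools import combinations

def map_phase(transaction, k):
    """Map step: emit every size-k candidate item-set in one transaction."""
    return [(itemset, 1) for itemset in combinations(sorted(transaction), k)]

def reduce_phase(emitted, min_support):
    """Reduce step: sum counts per item-set and keep the frequent ones."""
    counts = Counter()
    for itemset, one in emitted:
        counts[itemset] += one
    return {s: c for s, c in counts.items() if c >= min_support}

# Single-machine stand-in for the Hadoop job over a toy dataset.
transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
emitted = [kv for t in transactions for kv in map_phase(t, 2)]
frequent = reduce_phase(emitted, min_support=2)
```

The anti-monotone pruning of the space-pruning variants would discard any candidate whose sub-item-sets are already infrequent before the map phase emits it.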
MOLA II Laser Transmitter Calibration and Performance. 1.2
NASA Technical Reports Server (NTRS)
Afzal, Robert S.; Smith, David E. (Technical Monitor)
1997-01-01
The goal of the document is to explain the algorithm for determining the laser output energy from the telemetry data within the return packets from MOLA II. A simple algorithm is developed to convert the raw start detector data into laser energy, measured in millijoules. This conversion is dependent on three variables, start detector counts, array heat sink temperature and start detector temperature. All these values are contained within the return packets. The conversion is applied to the GSFC Thermal Vacuum data as well as the in-space data to date and shows good correlation.
Mapping bathymetry in an active surf zone with the WorldView2 multispectral satellite
NASA Astrophysics Data System (ADS)
Trimble, S. M.; Houser, C.; Brander, R.; Chirico, P.
2015-12-01
Rip currents are strong, narrow seaward flows of water that originate in the surf zones of many global beaches. They are related to hundreds of international drownings each year, but exact numbers are difficult to calculate due to logistical difficulties in obtaining accurate incident reports. Annual average rip current fatalities are estimated to be ~100, 53 and 21 in the United States (US), Costa Rica, and Australia respectively. Current warning systems (e.g. National Weather Service) do not account for fine-resolution nearshore bathymetry because it is difficult to capture. The method shown here could provide frequent, high-resolution maps of nearshore bathymetry at the scale required for improved rip prediction and warning. This study demonstrates a method for mapping bathymetry in the surf zone (20m deep and less), specifically within rip channels, because rips form at topographically low spots in the bathymetry as a result of feedback amongst waves, substrate, and antecedent bathymetry. The methods employ the Digital Globe WorldView2 (WV2) multispectral satellite and field measurements of depth to generate maps of the changing bathymetry at two embayed, rip-prone beaches: Playa Cocles, Puerto Viejo de Talamanca, Costa Rica, and Bondi Beach, Sydney, Australia. WV2 has a 1.1-day revisit rate and 1.84-m ground pixel resolution across 8 bands, including 'yellow' (585-625 nm) and 'coastal blue' (400-450 nm). The data are used to classify bottom type and to map depth to the return in multiple bands. The methodology is tested at each site for algorithm consistency between dates, and again for applicability between sites.
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
NASA Astrophysics Data System (ADS)
Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir
2018-03-01
The specific nature of high-rise investment projects, which entail long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. To develop the improved cost-benefit analysis algorithm for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors assembled the cost-benefit analysis algorithm for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, which improves the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.
An algorithm for automated layout of process description maps drawn in SBGN.
Genc, Begum; Dogrusoz, Ugur
2016-01-01
Evolving technology has increased the focus on genomics. The combination of today's advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specializes in process description (PD) maps as defined by SBGN. We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes, and extensive use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over the use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. An implementation of our algorithm in Java is available within the ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
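The CoSE base algorithm belongs to the force-directed (spring embedder) family. A generic single iteration of such a layout, without CoSE's compound-node handling or the SBGN-specific forces added in this paper, might look like:

```python
import math

def spring_layout_step(pos, edges, k=1.0, step=0.05):
    """One iteration of a basic force-directed (spring embedder) layout.
    A generic sketch of the algorithm family CoSE builds on, not CoSE itself.

    pos: {node: [x, y]} mutable positions; edges: list of (u, v) pairs.
    """
    disp = {n: [0.0, 0.0] for n in pos}
    nodes = list(pos)
    # repulsive force between every node pair: k^2 / d
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d
            disp[u][0] += f * dx / d
            disp[u][1] += f * dy / d
            disp[v][0] -= f * dx / d
            disp[v][1] -= f * dy / d
    # attractive spring force along each edge: d^2 / k
    for u, v in edges:
        dx = pos[u][0] - pos[v][0]
        dy = pos[u][1] - pos[v][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k
        disp[u][0] -= f * dx / d
        disp[u][1] -= f * dy / d
        disp[v][0] += f * dx / d
        disp[v][1] += f * dy / d
    for n in nodes:
        pos[n][0] += step * disp[n][0]
        pos[n][1] += step * disp[n][1]
    return pos
```

SBGN support then amounts to layering extra forces and movement rules (e.g. pushing substrates and products to opposite sides of a process node) on top of this scheme.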
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, as it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. Graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN models is developed. Four main indicators, including total gray, gray grade curve, characteristic direction and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
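The comparison of two indicator curves (e.g. gray grade curves) can be sketched with plain cosine similarity; the paper's loop cosine similarity modification is not reproduced here:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature curves sampled as vectors,
    e.g. two gray grade curves: 1.0 for identical direction, 0.0 for
    orthogonal curves."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

A simulated trace map whose indicator curves score closest to the real map's curves would be judged the best match.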
Development of a Two-Wheel Contingency Mode for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
Starin, Scott R.; ODonnell, James R., Jr.; Bauer, Frank (Technical Monitor)
2002-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on mission to the Cosmic Background Explorer (COBE), and is currently collecting data from its orbit near the second Sun-Earth libration point. Due to limited mass, power, and financial resources, a traditional reliability concept including fully redundant components was not feasible for MAP. Instead, the MAP design employs selective hardware redundancy in tandem with contingency software modes and algorithms to improve the odds of mission success. One direction for such improvement has been the development of a two-wheel backup control strategy. This strategy would allow MAP to position itself for maneuvers and collect science data should one of its three reaction wheels fail. Along with operational considerations, the strategy includes three new control algorithms. These algorithms would use the remaining attitude control actuators (thrusters and two reaction wheels) in ways that achieve control goals while minimizing adverse impacts on the functionality of other subsystems and software.
Maps Showing Seismic Landslide Hazards in Anchorage, Alaska
Jibson, Randall W.; Michael, John A.
2009-01-01
The devastating landslides that accompanied the great 1964 Alaska earthquake showed that seismically triggered landslides are one of the greatest geologic hazards in Anchorage. Maps quantifying seismic landslide hazards are therefore important for planning, zoning, and emergency-response preparation. The accompanying maps portray seismic landslide hazards for the following conditions: (1) deep, translational landslides, which occur only during great subduction-zone earthquakes that have return periods of ~300-900 yr; (2) shallow landslides for a peak ground acceleration (PGA) of 0.69 g, which has a return period of 2,475 yr, or a 2 percent probability of exceedance in 50 yr; and (3) shallow landslides for a PGA of 0.43 g, which has a return period of 475 yr, or a 10 percent probability of exceedance in 50 yr. Deep, translational landslide hazard zones were delineated based on previous studies of such landslides, with some modifications based on field observations of locations of deep landslides. Shallow-landslide hazards were delineated using a Newmark-type displacement analysis for the two probabilistic ground motions modeled.
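The Newmark-type displacement analysis used for the shallow-landslide maps can be sketched as a textbook sliding-block integration; this is an illustration of the general method, not the authors' implementation:

```python
def newmark_displacement(accel, a_crit, dt):
    """Newmark sliding-block displacement (illustrative sketch).

    accel  : ground acceleration samples (m/s^2)
    a_crit : critical (yield) acceleration of the slope (m/s^2)
    dt     : sample interval (s)

    The block starts to slide whenever ground acceleration exceeds
    a_crit, decelerates at a_crit once moving, and the permanent
    displacement is the double integral of the relative acceleration.
    """
    v = 0.0   # relative velocity of the sliding block
    d = 0.0   # accumulated permanent displacement
    for a in accel:
        if v > 0.0 or a > a_crit:
            v += (a - a_crit) * dt
            v = max(v, 0.0)   # the block cannot slide back uphill
            d += v * dt
    return d
```

Running this over a suite of records scaled to a target PGA (e.g. 0.69 g or 0.43 g) yields the displacement estimates that the hazard zones are based on.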
Maps showing seismic landslide hazards in Anchorage, Alaska
Jibson, Randall W.
2014-01-01
The devastating landslides that accompanied the great 1964 Alaska earthquake showed that seismically triggered landslides are one of the greatest geologic hazards in Anchorage. Maps quantifying seismic landslide hazards are therefore important for planning, zoning, and emergency-response preparation. The accompanying maps portray seismic landslide hazards for the following conditions: (1) deep, translational landslides, which occur only during great subduction-zone earthquakes that have return periods of ~300-900 yr; (2) shallow landslides for a peak ground acceleration (PGA) of 0.69 g, which has a return period of 2,475 yr, or a 2 percent probability of exceedance in 50 yr; and (3) shallow landslides for a PGA of 0.43 g, which has a return period of 475 yr, or a 10 percent probability of exceedance in 50 yr. Deep, translational landslide hazards were delineated based on previous studies of such landslides, with some modifications based on field observations of locations of deep landslides. Shallow-landslide hazards were delineated using a Newmark-type displacement analysis for the two probabilistic ground motions modeled.
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the mitochondrion) on the WWW.
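The AND/OR term search in capability (i) could be sketched as follows; this is a generic illustration in Python, not the original CGI code:

```python
def search(records, terms, mode="AND"):
    """Search records (dicts of field -> text) for terms combined with
    AND or OR, in the spirit of dbEngine's query support (a generic
    sketch, not the original implementation)."""
    def matches(record):
        text = " ".join(str(v).lower() for v in record.values())
        hits = [t.lower() in text for t in terms]
        return all(hits) if mode == "AND" else any(hits)
    return [r for r in records if matches(r)]
```

In the real engine, each matching record would then be rendered as a hypertext link rather than returned as a Python object.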
NASA Astrophysics Data System (ADS)
Oza, Nikunj
2012-03-01
A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. A set of training examples— examples with known output values—is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate’s measurements. The generalization performance of a learned model (how closely the target outputs and the model’s predicted outputs agree for patterns that have not been presented to the learning algorithm) would provide an indication of how well the model has learned the desired mapping. More formally, a classification learning algorithm L takes a training set T as its input. The training set consists of |T| examples or instances. It is assumed that there is a probability distribution D from which all training examples are drawn independently—that is, all the training examples are independently and identically distributed (i.i.d.). 
The ith training example is of the form (x_i, y_i), where x_i is a vector of values of several features and y_i represents the class to be predicted. In the sunspot classification example given above, each training example would represent one sunspot's classification (y_i) and the corresponding set of measurements (x_i). The output of a supervised learning algorithm is a model h that approximates the unknown mapping from the inputs to the outputs. In our example, h would map from the sunspot measurements to the type of sunspot. We may have a test set S—a set of examples not used in training that we use to test how well the model h predicts the outputs on new examples. Just as with the examples in T, the examples in S are assumed to be independent and identically distributed (i.i.d.) draws from the distribution D. We measure the error of h on the test set as the proportion of test cases that h misclassifies: (1/|S|) Σ_{(x,y) ∈ S} I(h(x) ≠ y), where I(v) is the indicator function—it returns 1 if v is true and 0 otherwise. In our sunspot classification example, we would identify additional examples of sunspots that were not used in generating the model, and use these to determine how accurate the model is—the fraction of the test samples that the model classifies correctly. An example of a classification model is the decision tree shown in Figure 23.1. We will discuss the decision tree learning algorithm in more detail later—for now, we assume that, given a training set with examples of sunspots, this decision tree is derived. This can be used to classify previously unseen examples of sunspots. For example, if a new sunspot's inputs indicate that its "Group Length" is in the range 10-15, then the decision tree would classify the sunspot as being of type "E," whereas if the "Group Length" is "NULL," the "Magnetic Type" is "bipolar," and the "Penumbra" is "rudimentary," then it would be classified as type "C."
In this chapter, we will add to the above description of classification problems. We will discuss decision trees and several other classification models. In particular, we will discuss the learning algorithms that generate these classification models, how to use them to classify new examples, and the strengths and weaknesses of these models. We will end with pointers to further reading on classification methods applied to astronomy data.
Removal of instrument signature from Mariner 9 television images of Mars
NASA Technical Reports Server (NTRS)
Green, W. B.; Jepsen, P. L.; Kreznar, J. E.; Ruiz, R. M.; Schwartz, A. A.; Seidman, J. B.
1975-01-01
The Mariner 9 spacecraft was inserted into orbit around Mars in November 1971. The two vidicon camera systems returned over 7300 digital images during orbital operations. The high volume of returned data and the scientific objectives of the Television Experiment made development of automated digital techniques for the removal of camera system-induced distortions from each returned image necessary. This paper describes the algorithms used to remove geometric and photometric distortions from the returned imagery. Enhancement processing of the final photographic products is also described.
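Photometric (shading) correction of vidicon response nonuniformity is classically a dark-frame subtraction followed by flat-field division; the sketch below is a textbook illustration of that idea, not the actual Mariner 9 pipeline:

```python
def flat_field_correct(raw, dark, flat):
    """Generic photometric correction: subtract the dark frame, then
    divide by the normalized flat field (camera response to uniform
    illumination). Images are lists of rows of pixel values."""
    mean_flat = sum(sum(row) for row in flat) / (len(flat) * len(flat[0]))
    return [[(r - d) / ((f / mean_flat) or 1e-12)
             for r, d, f in zip(raw_row, dark_row, flat_row)]
            for raw_row, dark_row, flat_row in zip(raw, dark, flat)]
```

Geometric distortion removal (resampling against reseau-mark positions) would be a separate step in such a pipeline.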
Design of an Autonomous Underwater Vehicle to Calibrate the Europa Clipper Ice-Penetrating Radar
NASA Astrophysics Data System (ADS)
Stone, W.; Siegel, V.; Kimball, P.; Richmond, K.; Flesher, C.; Hogan, B.; Lelievre, S.
2013-12-01
Jupiter's moon Europa has been prioritized as the target for the Europa Clipper flyby mission. A key science objective for the mission is to remotely characterize the ice shell and any subsurface water, including their heterogeneity, and the nature of surface-ice-ocean exchange. This objective is a critical component of the mission's overarching goal of assessing the habitability of Europa. The instrument targeted for addressing key aspects of this goal is an ice-penetrating radar (IPR). As a primary goal of our work, we will tightly couple airborne IPR studies of the Ross Ice Shelf by the Europa Clipper radar team with ground-truth data to be obtained from sub-glacial sonar and bio-geochemical mapping of the corresponding ice-water and water-rock interfaces using an advanced autonomous underwater vehicle (AUV). The ARTEMIS vehicle, a heavily morphed long-range, low-drag variant of the highly successful 4-degree-of-freedom hovering sub-ice ENDURANCE bot, will be deployed from a sea-ice drill hole adjacent to the McMurdo Ice Shelf (MIS) and will perform three classes of missions. The first includes original exploration and high-definition mapping of both the ice-water interface and the benthic interface on a length scale (approximately 10 kilometers under-ice penetration radius) that will definitively tie it to the synchronous airborne IPR over-flights. These exploration and mapping missions will be conducted at up to 10 different locations along the MIS in order to capture varying ice thickness and seawater intrusion into the ice shelf. Following initial mapping characterization, the vehicle will conduct astrobiology-relevant proximity operations using bio-assay sensors (custom-designed UV fluorescence and machine-vision-processed optical imagery) followed by point-targeted studies at regions of interest. Sample returns from the ice-water interface will be triggered autonomously using real-time-processed instrument data and onboard decision-to-collect algorithms.
ARTEMIS will be capable of conducting precision hovering proximity science in an unexplored environment, followed by high speed (1.5 m/s) return to the melt hole. The navigation system will significantly advance upon the successes of the prior DEPTHX and ENDURANCE systems and several novel pose-drift correction technologies will be developed and tested under ice during the project. The method of down-hole deployment and auto-docking return will be extended to a vertically-deployed, horizontally-recovered concept that is depth independent and highly relevant to an ice-water deployment on an icy moon. The presentation will discuss the mission down-select architecture for the ARTEMIS vehicle and its implications for the design of a Europa 'fast mover' carrier AUV, the onboard instrument suite, and the Antarctic mission CONOPS. The vehicle and crew will deploy to Antarctica in the 2015/2016 season.
Improving the MODIS Global Snow-Mapping Algorithm
NASA Technical Reports Server (NTRS)
Klein, Andrew G.; Hall, Dorothy K.; Riggs, George A.
1997-01-01
An algorithm (Snowmap) is under development to produce global snow maps at 500 meter resolution on a daily basis using data from the NASA MODIS instrument. MODIS, the Moderate Resolution Imaging Spectroradiometer, will be launched as part of the first Earth Observing System (EOS) platform in 1998. Snowmap is a fully automated, computationally frugal algorithm that will be ready to implement at launch. Forests represent a major limitation to the global mapping of snow cover, as a forest canopy both obscures and shadows the snow underneath. Landsat Thematic Mapper (TM) and MODIS Airborne Simulator (MAS) data are used to investigate the changes in reflectance that occur as a forest stand becomes snow covered and to propose changes to the Snowmap algorithm that will improve snow classification accuracy in forested areas.
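Snowmap's core test is a thresholded Normalized Difference Snow Index (NDSI); a simplified per-pixel version, with thresholds as commonly published for MODIS snow mapping and shown here only for illustration, is:

```python
def is_snow(green, swir, nir):
    """Simplified Snowmap-style pixel test (illustrative thresholds).

    green : reflectance near 0.55 um (MODIS band 4)
    swir  : reflectance near 1.6 um  (MODIS band 6)
    nir   : reflectance near 0.86 um (MODIS band 2)
    """
    ndsi = (green - swir) / (green + swir)
    # NDSI > 0.4 flags snow; the NIR test screens out dark water,
    # which can also exhibit a high NDSI
    return ndsi > 0.4 and nir > 0.11
```

The forest problem discussed in the abstract arises because a snow-covered canopy can depress the NDSI below such fixed thresholds, motivating the proposed algorithm changes.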
Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki
2012-01-01
Background: For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results: We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. Conclusions: Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by fast, non-grid algorithms onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486
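The grid-distribution step can be approximated by greedily snapping each preprocessed node to the nearest free grid point; this is a simplified stand-in for the paper's approximate pattern matching algorithm, shown only to illustrate the idea:

```python
def snap_to_grid(positions, spacing=1.0):
    """Greedily move continuously laid-out nodes onto free square-grid
    points (a simplified stand-in for the approximate pattern matching
    step; the published algorithm is more sophisticated).

    positions: {node: (x, y)} from a fast non-grid layout
    returns:   {node: (col, row)} grid coordinates, one node per point
    """
    taken = set()
    grid = {}
    for node, (x, y) in positions.items():
        col, row = round(x / spacing), round(y / spacing)
        # scan an expanding square around the nearest grid point
        # until a free one is found
        radius = 0
        placed = False
        while not placed:
            for dc in range(-radius, radius + 1):
                for dr in range(-radius, radius + 1):
                    cand = (col + dc, row + dr)
                    if cand not in taken:
                        taken.add(cand)
                        grid[node] = cand
                        placed = True
                        break
                if placed:
                    break
            radius += 1
    return grid
```

Because each node starts from its preprocessed position, nearby nodes stay nearby on the grid, which is the topological-preservation property the hybrid approach relies on.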
Virtual Network Embedding via Monte Carlo Tree Search.
Haeri, Soroush; Trajkovic, Ljiljana
2018-02-01
Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem by using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both the acceptance ratio and the revenue-to-cost ratio. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve embedding solutions.
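Monte Carlo tree search typically selects actions with the UCT rule, mean reward plus an exploration bonus; a minimal sketch of that selection step (not the MaVEn implementation):

```python
import math

def uct_select(children, c=1.4142135623730951):
    """Pick the child maximizing UCT = mean reward + c * sqrt(ln N / n).

    children: list of dicts with 'visits' (n) and 'value' (total reward);
    N is the total visit count. Unvisited children are tried first.
    """
    total = sum(ch["visits"] for ch in children)
    best, best_score = None, float("-inf")
    for ch in children:
        if ch["visits"] == 0:
            return ch  # always expand unvisited actions first
        score = (ch["value"] / ch["visits"]
                 + c * math.sqrt(math.log(total) / ch["visits"]))
        if score > best_score:
            best, best_score = ch, score
    return best
```

In a VNE setting each child would correspond to a candidate node mapping, and the reward would reflect embedding revenue versus cost.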
NASA Astrophysics Data System (ADS)
Domino, Krzysztof
2017-02-01
Cumulant analysis plays an important role in the analysis of non-Gaussian distributed data, of which share price returns are a good example. The purpose of this research is to develop a cumulant-based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. The algorithm is based on the Alternating Least Squares method and involves the simultaneous minimisation of the 2nd-6th cumulants of the multidimensional random variable (the percentage returns of many companies' shares). The algorithm was then tested during the recent crash on the Warsaw Stock Exchange. To detect the incoming crash and provide entry and exit signals for the investment strategy, the Hurst exponent was calculated using local DFA. It was shown that the introduced algorithm is on average better than the benchmark and other portfolio determination methods, but only within the examination window determined by low values of the Hurst exponent. Note that the algorithm is based on cumulant tensors up to the 6th order calculated for a multidimensional random variable, which is a novel idea. The algorithm can be expected to be useful in financial data analysis on a worldwide scale, as well as in the analysis of other types of non-Gaussian distributed data.
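The Hurst exponent estimation via detrended fluctuation analysis (DFA) can be sketched as follows; this is a minimal global DFA with a handful of window sizes, whereas the paper uses a local variant over a sliding examination window:

```python
import math

def dfa_hurst(series, windows=(4, 8, 16, 32)):
    """Estimate the Hurst exponent by detrended fluctuation analysis
    (a minimal sketch; production DFA uses many more window sizes).

    1. Build the cumulative profile of the mean-removed series.
    2. For each window size, detrend each window with a least-squares
       line and record the RMS fluctuation.
    3. The slope of log(fluctuation) vs. log(window size) estimates H.
    """
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for x in series:
        total += x - mean
        profile.append(total)

    logs_n, logs_f = [], []
    for n in windows:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = list(range(n))
            # least-squares linear detrend of the segment
            tm, sm = (n - 1) / 2.0, sum(seg) / n
            slope = (sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
                     / sum((ti - tm) ** 2 for ti in t))
            icept = sm - slope * tm
            sq += sum((si - (slope * ti + icept)) ** 2
                      for ti, si in zip(t, seg))
            count += n
        logs_n.append(math.log(n))
        logs_f.append(0.5 * math.log(sq / count))
    # slope of the log-log regression is the Hurst exponent estimate
    lm, fm = sum(logs_n) / len(logs_n), sum(logs_f) / len(logs_f)
    return (sum((a - lm) * (b - fm) for a, b in zip(logs_n, logs_f))
            / sum((a - lm) ** 2 for a in logs_n))
```

Uncorrelated noise should score near H = 0.5; persistent (trending) series score higher, and low local H values are what the paper uses as the crash-window signal.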
Seagrass Identification Using High-Resolution 532nm Bathymetric LiDAR and Hyperspectral Imagery
NASA Astrophysics Data System (ADS)
Pan, Z.; Prasad, S.; Starek, M. J.; Fernandez Diaz, J. C.; Glennie, C. L.; Carter, W. E.; Shrestha, R. L.; Singhania, A.; Gibeaut, J. C.
2013-12-01
Seagrass provides vital habitat for marine fisheries and is a key indicator species of coastal ecosystem vitality. Monitoring seagrass is therefore an important environmental initiative, but measuring details of seagrass distribution over large areas via remote sensing has proved challenging. Developments in airborne bathymetric light detection and ranging (LiDAR) provide great potential in this regard. Traditional bathymetric LiDAR systems have been limited in their ability to map within the shallow water zone (< 1 m) where seagrass is typically present due to limitations in receiver response and laser pulse length. Emergent short-pulse width bathymetric LiDAR sensors and waveform processing algorithms enable depth measurements in shallow water environments previously inaccessible. This 3D information of the benthic layer can be applied to detect seagrass and characterize its distribution. Researchers with the National Center for Airborne Laser Mapping (NCALM) at the University of Houston (UH) and the Coastal and Marine Geospatial Sciences Lab (CMGL) of the Harte Research Institute at Texas A&M University-Corpus Christi conducted a coordinated airborne and boat-based survey of the Redfish Bay State Scientific Area as part of a collaborative study to investigate the capabilities of bathymetric LiDAR and hyperspectral imaging for seagrass mapping. Redfish Bay, located along the middle Texas coast of the Gulf of Mexico, is a state scientific area designated for the purpose of protecting and studying native seagrasses. Redfish Bay is part of the broader Coastal Bend Bays estuary system recognized by the US Environmental Protection Agency (EPA) as a national estuary of significance. For this survey, UH acquired high-resolution discrete-return and full-waveform bathymetric data using their Optech Aquarius 532 nm green LiDAR. 
In a separate flight, UH collected 2 sets of hyperspectral imaging data (1.2-m pixel resolution and 72 bands, and 0.6m pixel resolution and 36 bands) with their CASI 1500 hyperspectral sensor. The ground survey was conducted by CMGL. The team used an airboat to collect in-situ radiometer measurements of sky irradiance and surface water reflectance at different locations in the bay. The team also collected water samples, GPS position, and depth. A follow-up survey was conducted to acquire ground-truth data of benthic type at over 80 locations within the bay. Two complementary approaches were developed to detect and map the seagrass cover over the study area - automated classification algorithms were validated with high spatial resolution hyperspectral imagery, and a continuous wavelet based signal processing and pulse broadening analysis of the digitized returns was performed with the full waveform of the bathymetric LiDAR. The two approaches were compared to the collected ground truth data of seagrass type, height, and location. Results of the evaluation will be presented, along with a preliminary discussion of the fusion of the LiDAR and hyperspectral imagery for improved overall classification accuracy.
Distance-Based Phylogenetic Methods Around a Polytomy.
Davidson, Ruth; Sullivant, Seth
2014-01-01
Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
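The UPGMA algorithm discussed here is classical average-linkage clustering on a dissimilarity map; a minimal sketch that returns only the reconstructed tree topology (real implementations also track branch lengths):

```python
def upgma(labels, dist):
    """UPGMA hierarchical clustering (minimal sketch).

    labels : list of leaf names
    dist   : {frozenset({a, b}): dissimilarity} over all leaf pairs
    returns a nested-tuple tree topology.
    """
    clusters = {name: 1 for name in labels}   # cluster -> leaf count
    d = dict(dist)                            # current pairwise distances
    while len(clusters) > 1:
        pair = min(d, key=d.get)              # merge the closest pair
        a, b = tuple(pair)
        merged = (a, b)
        size = clusters[a] + clusters[b]
        # average-linkage distance update against every other cluster
        for c in list(clusters):
            if c in (a, b):
                continue
            d[frozenset({merged, c})] = (
                clusters[a] * d[frozenset({a, c})]
                + clusters[b] * d[frozenset({b, c})]) / size
        # drop the merged clusters and any distances involving them
        del clusters[a], clusters[b]
        d = {p: v for p, v in d.items() if a not in p and b not in p}
        clusters[merged] = size
    return next(iter(clusters))
```

Each dissimilarity map is thus mapped to one combinatorial tree; the regions of input space that map to the same tree are exactly the cones whose geometry the paper compares across methods.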
Mapreduce is Good Enough? If All You Have is a Hammer, Throw Away Everything That's Not a Nail!
Lin, Jimmy
2013-03-01
Hadoop is currently the large-scale data analysis "hammer" of choice, but there exist classes of algorithms that aren't "nails" in the sense that they are not particularly amenable to the MapReduce programming model. To address this, researchers have proposed MapReduce extensions or alternative programming models in which these algorithms can be elegantly expressed. This article espouses a very different position: that MapReduce is "good enough," and that instead of trying to invent screwdrivers, we should simply get rid of everything that's not a nail. To be more specific, much discussion in the literature surrounds the fact that iterative algorithms are a poor fit for MapReduce. The simple solution is to find alternative, noniterative algorithms that solve the same problem. This article captures my personal experiences as an academic researcher as well as a software engineer in a "real-world" production analytics environment. From this combined perspective, I reflect on the current state and future of "big data" research.
Liu, Zhong; Gao, Xiaoguang; Fu, Xiaowei
2018-05-08
In this paper, we study a cooperative search and coverage algorithm for a given bounded rectangular region containing several unknown stationary targets, searched by a team of unmanned aerial vehicles (UAVs) with non-ideal sensors and limited communication ranges. Our goal is to minimize the search time while gathering more information about the environment and finding more targets. For this purpose, a novel cooperative search and coverage algorithm with a controllable revisit mechanism is presented. First, as the representation of the environment, the cognitive maps, which include the target probability map (TPM), the uncertain map (UM), and the digital pheromone map (DPM), are constructed. We also design a distributed update and fusion scheme for the cognitive maps. This scheme guarantees that the cognitive maps of all UAVs converge to a common map that reflects the true presence or absence of a target in each cell of the search region. Second, we develop a controllable revisit mechanism based on the DPM. This mechanism concentrates the UAVs on revisiting sub-areas that have a large target probability or high uncertainty. Third, within a distributed receding-horizon optimization framework, a path planning algorithm for multi-UAV cooperative search and coverage is designed. In the path planning algorithm, the movement of the UAVs is restricted by potential fields to satisfy the collision avoidance and connectivity maintenance constraints. Moreover, using a minimum spanning tree (MST) topology optimization strategy, we obtain a tradeoff between search coverage enhancement and connectivity maintenance. The feasibility of the proposed algorithm is demonstrated by comparison simulations analyzing the effects of the controllable revisit mechanism and the connectivity maintenance scheme.
The Monte Carlo method is employed to evaluate the influence of the number of UAVs, the sensing radius, the detection and false alarm probabilities, and the communication range on the proposed algorithm.
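The target probability map update in schemes of this kind is typically a Bayes rule driven by the sensor's detection and false-alarm probabilities. The paper's exact update and fusion equations are not reproduced here; the following is a generic sketch with illustrative sensor parameters.

```python
def update_cell(p, detected, p_d=0.9, p_f=0.1):
    """Bayes update of one cell's target probability after one sensor reading.
    p: prior probability that a target occupies the cell
    detected: True if the sensor reported a detection in the cell
    p_d: detection probability (target present), p_f: false-alarm probability
    (p_d, p_f values are illustrative, not taken from the paper)."""
    if detected:
        num = p_d * p
        den = p_d * p + p_f * (1 - p)
    else:
        num = (1 - p_d) * p
        den = (1 - p_d) * p + (1 - p_f) * (1 - p)
    return num / den
```

Repeated consistent observations drive each cell's probability toward 0 or 1, which is what allows the fused maps to converge to the targets' true presence or absence.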
Boyer, Nicole R S; Miller, Sarah; Connolly, Paul; McIntosh, Emma
2016-04-01
The Strengths and Difficulties Questionnaire (SDQ) is a behavioural screening tool for children. The SDQ is increasingly used as the primary outcome measure in population health interventions involving children, but it is not preference based; therefore, its role in allocative economic evaluation is limited. The Child Health Utility 9D (CHU9D) is a generic preference-based health-related quality-of-life measure. This study investigates the applicability of the SDQ outcome measure for use in economic evaluations and examines its relationship with the CHU9D by testing previously published mapping algorithms. The aim of the paper is to explore the feasibility of using the SDQ within economic evaluations of school-based population health interventions. Data were available from children participating in a cluster randomised controlled trial of the school-based Roots of Empathy programme in Northern Ireland. Utility was calculated using the original and alternative CHU9D tariffs along with two SDQ mapping algorithms. t tests were performed for pairwise differences in utility values from the preference-based tariffs and mapping algorithms. Mean (standard deviation) SDQ total difficulties and prosocial scores were 12 (3.2) and 8.3 (2.1). Utility values obtained from the original tariff, alternative tariff, and mapping algorithms using five and three SDQ subscales were 0.84 (0.11), 0.80 (0.13), 0.84 (0.05), and 0.83 (0.04), respectively. Each method for calculating utility produced statistically significantly different values, except the original tariff and the five-subscale algorithm. Initial evidence suggests the SDQ and CHU9D are related in some of their measurement properties. The mapping algorithm using five SDQ subscales was found to be optimal in predicting mean child health utility. Future research valuing changes in SDQ scores would contribute to this research.
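Mapping algorithms of this kind are typically linear regressions from SDQ subscale scores to CHU9D utility. The published coefficients are not reproduced here; every number in the example below is a hypothetical placeholder, shown only to illustrate the form of such a mapping.

```python
def predict_utility(subscale_scores, coefficients, intercept):
    """Generic linear mapping from SDQ subscale scores to a CHU9D utility
    value. The coefficients passed in are hypothetical, not the published
    algorithms' values."""
    return intercept + sum(c * s for c, s in zip(coefficients, subscale_scores))

# hypothetical 5-subscale mapping: utility falls as difficulty scores rise,
# rises with the prosocial score (last entry)
example = predict_utility([2, 1, 3, 2, 8],
                          [-0.01, -0.01, -0.01, -0.01, 0.005],
                          0.90)
```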
Towards Unmanned Systems for Dismounted Operations in the Canadian Forces
2011-01-01
LIDAR, and RADAR) and lower power/mass, passive imaging techniques such as structure from motion and simultaneous localisation and mapping (SLAM)...sensors and learning algorithms. 5.1.2 Simultaneous localisation and mapping: SLAM algorithms concurrently estimate a robot pose and a map of unique...locations and vehicle pose are part of the SLAM state vector and are estimated in each update step. AISS developed a monocular camera-based SLAM
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data.
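The two-stage idea can be sketched as follows (this is in the spirit of partition tuning, not the PTSH code itself): keys are first hashed into many virtual partitions, and the observed partition sizes are then packed greedily onto reducers, so a skewed key distribution no longer overloads a single reducer.

```python
from collections import Counter

def assign_virtual_partitions(keys, n_virtual, n_reducers):
    """Skew-handling sketch: hash keys into n_virtual partitions, then pack
    the largest virtual partitions onto the least-loaded reducer.
    Returns (virtual-partition -> reducer mapping, per-reducer loads)."""
    vp_sizes = Counter()
    for k in keys:
        vp_sizes[hash(k) % n_virtual] += 1
    # greedy bin packing: biggest virtual partition goes to lightest reducer
    reducer_load = [0] * n_reducers
    mapping = {}
    for vp, size in vp_sizes.most_common():
        r = min(range(n_reducers), key=lambda i: reducer_load[i])
        reducer_load[r] += size
        mapping[vp] = r
    return mapping, reducer_load
```

With a one-stage hash partitioner, all records sharing a hot key land on one reducer by construction; here the hot key still stays in one virtual partition, but the remaining partitions are balanced around it.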
Biomedical Terminology Mapper for UML projects.
Thibault, Julien C; Frey, Lewis
2013-01-01
As the biomedical community collects and generates more and more data, the need to describe these datasets for exchange and interoperability becomes crucial. This paper presents a mapping algorithm that can help developers expose local implementations described with UML through standard terminologies. The input UML class or attribute name is first normalized and tokenized, then lookups in a UMLS-based dictionary are performed. For the evaluation of the algorithm, 142 UML projects were extracted from caGrid and automatically mapped to National Cancer Institute (NCI) terminology concepts. Resulting mappings at the UML class and attribute levels were compared to the manually curated annotations provided in caGrid. Results are promising and show that this type of algorithm could speed up the tedious process of mapping local implementations to standard biomedical terminologies.
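The normalize-and-tokenize step can be pictured as splitting camelCase and snake_case identifiers into lowercase tokens before the dictionary lookup. This is a sketch; the paper's actual normalization rules may differ.

```python
import re

def tokenize_uml_name(name):
    """Normalize a UML class/attribute name for terminology lookup:
    split camelCase and snake_case, lowercase the resulting tokens."""
    # insert a break at each lower/digit -> Upper boundary
    spaced = re.sub(r'(?<=[a-z0-9])(?=[A-Z])', ' ', name)
    # then split on underscores, spaces, and any other non-word characters
    return [t.lower() for t in re.split(r'[\W_]+', spaced) if t]
```

The token list would then be looked up (as a whole and per token) against a UMLS-derived dictionary of concept names.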
Texture Analysis of Chaotic Coupled Map Lattices Based Image Encryption Algorithm
NASA Astrophysics Data System (ADS)
Khan, Majid; Shah, Tariq; Batool, Syeda Iram
2014-09-01
Data security has recently become essential in many settings, such as web communication, multimedia systems, medical imaging, telemedicine, and military communication. However, many existing schemes suffer from a lack of robustness and security. In this letter, after investigating the fundamental properties of chaotic trigonometric maps and coupled map lattices, we present a chaos-based image encryption algorithm built on coupled map lattices. The proposed mechanism reduces the periodicity problems of ergodic dynamical systems in chaos-based image encryption. To assess the security of the encrypted image, the correlation of adjacent pixels and the texture characteristics were analyzed. The algorithm aims to minimize the problems that arise in image encryption.
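A standard diffusively coupled logistic-map lattice (the general shape of such schemes; the parameters here are illustrative, not the letter's) can be iterated into a keystream and XORed with pixel bytes:

```python
def cml_step(x, eps=0.1, r=3.99):
    """One step of a diffusively coupled logistic-map lattice with periodic
    boundary conditions. eps is the coupling strength, r the logistic
    parameter (values illustrative)."""
    f = [r * xi * (1 - xi) for xi in x]
    n = len(x)
    return [(1 - eps) * f[i] + (eps / 2) * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

def keystream(seed, length, size=8):
    """Iterate the lattice from a seed and quantize one site to bytes."""
    x = [(seed + i * 0.0137) % 1.0 for i in range(size)]
    for _ in range(100):          # discard a transient
        x = cml_step(x)
    out = []
    for _ in range(length):
        x = cml_step(x)
        out.append(int(x[0] * 256) % 256)
    return out
```

Encryption is `cipher = [p ^ k for p, k in zip(pixels, keystream(seed, len(pixels)))]`; XORing again with the same seed recovers the pixels.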
Hernandez, Penni; Podchiyska, Tanya; Weber, Susan; Ferris, Todd; Lowe, Henry
2009-11-14
The Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse integrates medication information from two Stanford hospitals that use different drug representation systems. To merge this pharmacy data into a single, standards-based model supporting research, we developed an algorithm to map HL7 pharmacy orders to RxNorm concepts. A formal evaluation of this algorithm on 1.5 million pharmacy orders showed that the system could accurately assign pharmacy orders in over 96% of cases. This paper describes the algorithm and discusses some of the causes of failures in mapping to RxNorm.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
A Multifactorial, Criteria-based Progressive Algorithm for Hamstring Injury Treatment.
Mendiguchia, Jurdan; Martinez-Ruiz, Enrique; Edouard, Pascal; Morin, Jean-Benoît; Martinez-Martinez, Francisco; Idoate, Fernando; Mendez-Villanueva, Alberto
2017-07-01
Given the prevalence of hamstring injuries in football, a rehabilitation program that effectively promotes muscle tissue repair and functional recovery is paramount to minimize reinjury risk and optimize player performance and availability. This study aimed to assess the effectiveness of administering an individualized, multifactorial, criteria-based rehabilitation algorithm (RA) for hamstring injury in comparison with a general rehabilitation protocol (RP). Implementing a double-blind randomized controlled trial approach, two equal groups of 24 football players (48 total) followed either the RA or the validated RP, starting 5 d after an acute hamstring injury. Within 6 months after return to sport, six hamstring reinjuries occurred in the RP group versus one in the RA group (relative risk = 6, 90% confidence interval = 1-35; clinical inference: very likely beneficial effect). The average time to return to sport was possibly shorter (effect size = 0.34 ± 0.42) in RP (23.2 ± 11.7 d) compared with RA (25.5 ± 7.8 d) (-13.8%, 90% confidence interval = -34.0% to 3.4%; clinical inference: possibly small effect). At the time of return to sport, RA players showed substantially better 10-m time, maximal sprinting speed, and greater mechanical variables related to speed (i.e., maximum theoretical speed and maximal horizontal power) than RP players. Although return to sport was slower, male football players who underwent an individualized, multifactorial, criteria-based algorithm with a performance- and primary-risk-factor-oriented training program from the early stages of the process markedly decreased the risk of reinjury compared with a general protocol in which long-length strength training exercises were prioritized.
Algorithm for Management of the Refractive Aerosinusitis Patient.
Boston, Andrew G; McMains, K Christopher; Chen, Philip G; Weitzel, Erik K
2018-02-06
For some career military aviators, their ability to continue on flight status is limited by the pressure and pain of aerosinusitis, which is present only while in the flying environment. Failure to treat their disease process can mean the end of their flying careers and the loss of valuable assets trained with taxpayer dollars. Because some medications commonly used in the treatment of sinus disease are not allowed in aviation, this presents a unique problem for medical management. Surgical treatment must be aimed at symptom relief and not solely disease mitigation. One alternative is operating "beyond the scope of disease" present during a one-atmosphere clinic visit. We present a case series of nine career aviators with aerosinusitis treated at one academic military otolaryngology department in a tertiary care facility. Results from a treatment algorithm that balances symptomatology and staged surgical intervention are reviewed. The primary endpoint was return to flight duty. For patients treated according to this algorithm, the mean time to return to flight duty was 3.8 mo, requiring an average of 1.2 surgeries. To date, 100% of career aviators have returned to flight duty using this method. Refractory aerosinusitis represents a potentially career-ending medical condition for the aviator and lost training costs to the taxpayers. Using the treatment algorithm presented, 100% of aviators were able to return to flight duty, a savings of millions of dollars for taxpayers. Future work will focus on modifications to the surgical techniques to reduce the extent of surgery while maintaining satisfactory results. Additional study should be undertaken to assess the generalizability of these results in the broader aviation community.
Thompson, A J; Weary, D M; von Keyserlingk, M A G
2017-05-01
The electronic equipment used on farms can be creatively co-opted to collect data for which it was not originally designed. In the current study, we describe 2 novel algorithms that harvest data from electronic feeding equipment and data loggers used to record standing and lying behavior, to estimate the time that dairy cows spend away from their pen to be milked. Our 2 objectives were to (1) measure the ability of the first algorithm to estimate the time cows spend away from the pen as a group and (2) determine the capability of a second algorithm to estimate the time it takes for individual cows to return to their pen after being milked. To achieve these objectives, we conducted 2 separate experiments: first, to estimate group time away, the feeding behavior of 1 pen of 20 Holstein cows was monitored electronically for 1 mo; second, to measure individual latency to return to the pen, feeding and lying behavior of 12 healthy Holstein cows was monitored electronically from parturition to 21 d in milk. For both experiments, we monitored the time each individual cow exited the pen before each milking and when she returned to the pen after milking using video recordings. Estimates generated by our algorithms were then compared with the times captured from the video recordings. Our first algorithm provided reliable pen-based estimates of the minimum time cows spent away from the pen to be milked in the morning [coefficient of determination (R2) = 0.92] and afternoon (R2 = 0.96). The second algorithm was able to estimate the time it took for individual cows to return to the pen after being milked in the morning (R2 = 0.98), but less so in the afternoon (R2 = 0.67). This study illustrates how data from electronic systems used to assess feeding and lying behavior can be mined to estimate novel measures.
New work is now required to improve the estimates of our algorithm for individuals, for example by adding data from other electronic monitoring systems on the farm.
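The abstract does not give the algorithms themselves, but the underlying idea, inferring absence from the gap in feeder visits around a milking, can be sketched as follows (a hypothetical illustration, not the paper's method):

```python
def time_away(feed_events, milking_start):
    """Estimate one cow's time away from the pen around a milking: the gap
    between her last feeder visit before milking starts and her first visit
    afterwards. feed_events: sorted visit times in hours; returns hours."""
    before = [t for t in feed_events if t <= milking_start]
    after = [t for t in feed_events if t > milking_start]
    if not before or not after:
        return None
    return after[0] - before[-1]

def pen_minimum_time_away(cows, milking_start):
    """Pen-level lower bound: the smallest individual gap, since the pen is
    empty at least for as long as every cow is away."""
    gaps = [g for g in (time_away(c, milking_start) for c in cows) if g is not None]
    return min(gaps) if gaps else None
```

This also shows why the estimate is a minimum time away: a cow may return to the pen well before her first post-milking feeder visit.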
Personalized Medicine in Veterans with Traumatic Brain Injuries
2011-05-01
UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map (left panel) demonstrating that the panel of 18... UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as heat maps demonstrating the efficacy of using all 13
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.; Gallagher, Andrea
1992-01-01
The sedimentary sections exposed in the Canyonlands and Arches National Parks region of Utah (generally referred to as 'Canyonlands') consist of sandstones, shales, limestones, and conglomerates. Reflectance spectra of weathered surfaces of rocks from these areas show two components: (1) variations in spectrally detectable mineralogy, and (2) variations in the relative ratios of the absorption bands between minerals. Both types of information can be used together to map each major lithology, and the Clark spectral-feature mapping algorithm is applied to this task.
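A common quantitative core of such mapping is the continuum-removed absorption band depth. The sketch below is a simplified stand-in for the full feature-mapping algorithm and uses nearest-sample lookups for brevity:

```python
def band_depth(wavelengths, reflectance, left, center, right):
    """Absorption band depth D = 1 - R_band / R_continuum, with the continuum
    interpolated linearly between two shoulder wavelengths (left, right).
    A standard band-depth measure; real pipelines interpolate and fit
    features rather than taking nearest samples."""
    def r_at(w):
        # nearest-sample reflectance (sketch)
        i = min(range(len(wavelengths)), key=lambda j: abs(wavelengths[j] - w))
        return reflectance[i]

    r_left, r_band, r_right = r_at(left), r_at(center), r_at(right)
    # linear continuum across the feature
    t = (center - left) / (right - left)
    continuum = r_left + t * (r_right - r_left)
    return 1.0 - r_band / continuum
```

Per-pixel band depths for diagnostic features (e.g., clay or carbonate absorptions), combined with their relative ratios, are what allow each lithology to be mapped.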
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley
2015-01-15
Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm ("Fast" and "EMPIRE10"). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy).
Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
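The two quantities in the regression can be computed directly. The slopes below are the ones reported in the abstract, but the fitted intercept and per-algorithm term are not given there, so this reproduces only the modeled trend, not the full fitted model:

```python
import math

def landmark_error(p_manual, p_auto):
    """d_E: Euclidean distance (mm) between a manually placed landmark and
    its automatically mapped position."""
    return math.dist(p_manual, p_auto)

def modeled_dose_error_trend(d_e, dose, sd_dose):
    """|ΔD| trend from the abstract's reported slopes: 0.42 Gy per mm of
    registration error, 0.05 Gy per Gy of dose, and 1.4 Gy per Gy of local
    dose SD. The intercept and the algorithm offset (≤1 Gy) are omitted
    because they are not reported."""
    return 0.42 * d_e + 0.05 * dose + 1.4 * sd_dose
```

For example, at the EMPIRE10 average error of 5.2 mm in a 60 Gy region with a 1 Gy local dose SD, the modeled trend contributes roughly 6.6 Gy before intercept and algorithm terms.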
A pseudoinverse deformation vector field generator and its applications
Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.
2010-01-01
Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images for a single prostate patient acquired on different days. A diffeomorphic deformable image registration is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: the ITK Demons algorithm for the lung patient images and an in-house B-Spline algorithm for the prostate patient images. Both Demons and B-Spline have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S (DVF_R-S). The same DIR is used to generate DVF_S-R. Additionally, our PIDVF generator is used to create PIDVF_S-R. Back-and-forth mapping of a set of points (used as surrogates of contours) using DVF_R-S and DVF_S-R is compared to back-and-forth mapping performed with DVF_R-S and PIDVF_S-R. The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-Spline algorithm. These self-consistency improvements are not affected by exchanging R and S. It is also demonstrated that differences between DVF_S-R and PIDVF_S-R can be used as a criterion to check the quality of the DVF.
Conclusions: Use of a DVF and its PIDVF will improve the self-consistency of point, contour, and dose mappings in image-guided adaptive therapy. PMID:20384247
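The back-and-forth consistency requirement has a compact core: an inverse displacement v must satisfy v(x) = -u(x + v(x)), where u is the forward displacement. The paper's generator seeds this with nearest-neighbor interpolation and an iterative search; the 1-D fixed-point iteration below is only a sketch of that core, not the paper's code.

```python
def pseudo_inverse_displacement(u, x, n_iter=50):
    """Fixed-point iteration for the inverse displacement at point x:
    v_{k+1} = -u(x + v_k).  At convergence, mapping x by v and then by u
    returns to x, which is the self-consistency property a PIDVF targets.
    u: forward displacement field given as a callable (1-D sketch)."""
    v = 0.0
    for _ in range(n_iter):
        v = -u(x + v)
    return v
```

For u(y) = 0.1*y, the forward map sends y to 1.1*y; the recovered inverse displacement at x = 1.1 converges to -0.1, so x + v is exactly the point whose forward image is x.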
An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques
2018-01-09
ARL-TR-8272, January 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques.
Wolf, Susan M.; Burke, Wylie; Koenig, Barbara A.
2015-01-01
Both bioethics and law have governed human genomics by distinguishing research from clinical practice. Yet the rise of translational genomics now makes this traditional dichotomy inadequate. This paper pioneers a new approach to the ethics of translational genomics. It maps the full range of ethical approaches needed, proposes a “layered” approach to determining the ethics framework for projects combining research and clinical care, and clarifies the key role that return of results can play in advancing translation. PMID:26479558
NASA Astrophysics Data System (ADS)
Glendinning, Paul
2011-12-01
Newton's cradle for two balls with Hertzian interactions is considered as a hybrid system, and this makes it possible to derive return maps for the motion between collisions in an exact form despite the fact that the three-halves interaction law cannot be solved in closed form. The return maps depend on a constant whose value can only be determined numerically, but solutions can be written down explicitly in terms of this parameter, and we compare this with the results of simulations. The results are in fact independent of the details of the interaction potential.
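The three-halves law referred to here is the Hertz contact force F = k δ^{3/2} in the overlap δ. A direct numerical integration of a single two-ball collision (parameters illustrative; the paper treats the dynamics analytically as a hybrid system) shows the behavior the return maps encode: for equal masses the collision is elastic, and the velocities exchange.

```python
def simulate_collision(v1=1.0, v2=0.0, m=1.0, k=1000.0, dt=1e-5):
    """Two equal balls of radius 0.1 with Hertzian contact F = k*delta**1.5,
    integrated through one collision with semi-implicit Euler.
    Returns the post-collision velocities (k and dt are illustrative)."""
    x1, x2 = -0.1, 0.1            # centers start just touching
    touched = False
    while True:
        delta = 0.2 - (x2 - x1)   # overlap of the two spheres
        if delta > 0:
            touched = True
            f = k * delta ** 1.5
            v1 -= f / m * dt      # equal and opposite contact impulses
            v2 += f / m * dt
        elif touched:
            return v1, v2         # compressed and separated again: done
        x1 += v1 * dt
        x2 += v2 * dt
```

Momentum is conserved to rounding error because the impulses are applied in equal and opposite pairs; the velocity exchange emerges from the integration rather than being imposed, consistent with the exact treatment in the paper.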
EAARL Coastal Topography - Northeast Barrier Islands 2007: Bare Earth
Nayegandhi, Amar; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Yates, Xan; Bonisteel, Jamie M.
2008-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced collaboratively by the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the northeast coastal barrier islands in New York and New Jersey, acquired April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
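The first/last significant-return extraction can be pictured as a threshold crossing over the digitized return amplitudes of each waveform. This is a simplified stand-in for the ALPS pulse-detection algorithms, and the threshold value is illustrative:

```python
def significant_returns(waveform, threshold=10):
    """Indices of the first and last significant returns in a digitized
    lidar waveform: the first and last samples whose amplitude reaches a
    noise threshold. Returns (None, None) if nothing crosses it."""
    above = [i for i, a in enumerate(waveform) if a >= threshold]
    if not above:
        return None, None
    return above[0], above[-1]
```

In vegetated terrain the first significant return corresponds to the first surface (e.g., canopy top) and the last return supplies the 'bare earth' candidate that the filtering algorithms then operate on.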
EAARL Topography - Natchez Trace Parkway 2007: First Surface
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan
2008-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Natchez Trace Parkway in Mississippi, acquired on September 14, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
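The waveform step described above (extracting the range to the first and last significant return) can be sketched as a simple threshold detector. This is an illustrative stand-in, not the ALPS implementation; the waveform shape, threshold, and sample values are invented:

```python
import numpy as np

def significant_returns(waveform, threshold):
    """Return (first, last) sample indices where the waveform amplitude
    exceeds the detection threshold, or None if no return is detected."""
    idx = np.flatnonzero(np.asarray(waveform) > threshold)
    if idx.size == 0:
        return None
    return int(idx[0]), int(idx[-1])

# Synthetic waveform: canopy return near sample 20, ground return near sample 55.
t = np.arange(80)
waveform = (40 * np.exp(-0.5 * ((t - 20) / 2.0) ** 2)
            + 90 * np.exp(-0.5 * ((t - 55) / 1.5) ** 2)
            + 2)  # constant noise floor
first, last = significant_returns(waveform, threshold=10)
print(first, last)  # first return (canopy top) and last return (bare earth)
```

In this spirit, the first return supports first-surface maps while the last return feeds the bare-earth filtering described above.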
EAARL Topography - Vicksburg National Military Park 2008: Bare Earth
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan
2008-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Vicksburg National Military Park in Mississippi, acquired on March 6, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL Coastal Topography - Northeast Barrier Islands 2007: First Surface
Nayegandhi, Amar; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Yates, Xan; Bonisteel, Jamie M.
2009-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced collaboratively by the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the northeast coastal barrier islands in New York and New Jersey, acquired April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL Topography-Vicksburg National Military Park 2007: First Surface
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan
2009-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived first-surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Vicksburg National Military Park in Mississippi, acquired on September 12, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL-B coastal topography: eastern New Jersey, Hurricane Sandy, 2012: first surface
Wright, C. Wayne; Fredericks, Xan; Troche, Rodolfo J.; Klipp, Emily S.; Kranenburg, Christine J.; Nagle, David B.
2014-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida. This project provides highly detailed and accurate datasets for a portion of the New Jersey coastline beachface, acquired pre-Hurricane Sandy on October 26, and post-Hurricane Sandy on November 1 and November 5, 2012. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar system, known as the second-generation Experimental Advanced Airborne Research Lidar (EAARL-B), was used during data acquisition. The EAARL-B system is a raster-scanning, waveform-resolving, green-wavelength (532-nm) lidar designed to map nearshore bathymetry, topography, and vegetation structure simultaneously. The EAARL-B sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, down-looking red-green-blue (RGB) and infrared (IR) digital cameras, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL-B platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL-B system. The resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. 
ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Web site.
EAARL Coastal Topography--Cape Canaveral, Florida, 2009: First Surface
Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Plant, Nathaniel; Wright, C.W.; Nagle, D.B.; Serafin, K.S.; Klipp, E.S.
2011-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Kennedy Space Center, FL. This project provides highly detailed and accurate datasets of a portion of the eastern Florida coastline beachface, acquired on May 28, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations.
EAARL Coastal Topography - Sandy Hook 2007
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.
2008-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of Gateway National Recreation Area's Sandy Hook Unit in New Jersey, acquired on May 16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys.
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for pre-survey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.
Langley, Jason; Zhao, Qun
2009-09-07
The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] is exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom, but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, which posed an additional burden to the phase unwrapping algorithm; Gaussian noise was then added to generate a low signal-to-noise ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev and Legendre implementations of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations compare well to PRELUDE 3D, a 3D phase unwrapping software package widely recognized in functional MRI.
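As a one-dimensional illustration of the model-based idea (a sketch, not the authors' 3D algorithm), a wrapped phase can be unwrapped along a line and then projected onto a low-order Chebyshev basis so the result is smooth and globally consistent; the coefficients below are invented:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# True smooth phase on [-1, 1]: a low-order Chebyshev series whose range
# extends well beyond [-pi, pi], so the measured phase wraps.
x = np.linspace(-1, 1, 401)
coeffs_true = [0.5, 4.0, 2.5, 1.0]            # c0..c3 (hypothetical values)
phase_true = C.chebval(x, coeffs_true)

# Measured (wrapped) phase, as an MRI scanner would produce it.
wrapped = np.angle(np.exp(1j * phase_true))

# Model-based unwrapping: unwrap along the line, then fit the Chebyshev model.
unwrapped = np.unwrap(wrapped)
coeffs_fit = C.chebfit(x, unwrapped, deg=3)
phase_est = C.chebval(x, coeffs_fit)

# The estimate may differ from the truth by a constant multiple of 2*pi.
offset = np.round((phase_true - phase_est).mean() / (2 * np.pi)) * 2 * np.pi
print(np.max(np.abs(phase_true - (phase_est + offset))))  # small residual
```

The paper's 3D method extends this idea by modeling the volume as a product of three such one-dimensional expansions.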
Evaluation of algorithms used to order markers on genetic maps.
Mollinari, M; Margarido, G R A; Vencovsky, R; Garcia, A A F
2009-12-01
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers at a fixed distance of 3 cM from each other. In all, 700 F2 populations of 100 and 400 individuals were randomly simulated with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. With a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100 individuals), the probability of linkage in repulsion between markers increases and, in this case, the algorithms TRY and SER associated with RIPPLE under the LHMC criterion provide better results.
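As a toy illustration of one of the ordering criteria above, the SARF statistic can be computed for a candidate order and minimized by exhaustive search (feasible only for a handful of markers); the pairwise recombination fractions below are invented:

```python
import numpy as np
from itertools import permutations

def sarf(order, rf):
    """Sum of adjacent recombination fractions (SARF) for a marker order.
    rf is a symmetric matrix of pairwise recombination fractions."""
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

# Hypothetical recombination fractions for 4 markers whose true order is
# 0-1-2-3 (nearby markers recombine less often).
rf = np.array([[0.00, 0.03, 0.06, 0.09],
               [0.03, 0.00, 0.03, 0.06],
               [0.06, 0.03, 0.00, 0.03],
               [0.09, 0.06, 0.03, 0.00]])

# Exhaustive search: the order minimizing SARF is the true one (or its
# reverse, which is equivalent for a linkage group).
best = min(permutations(range(4)), key=lambda o: sarf(o, rf))
print(best, sarf(best, rf))
```

Algorithms such as TRY, SER and RECORD exist precisely because this exhaustive search becomes intractable as the number of markers grows.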
A new chaotic multi-verse optimization algorithm for solving engineering optimization problems
NASA Astrophysics Data System (ADS)
Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella
2018-03-01
The multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms, inspired by the multi-verse theory in physics. However, like most optimization algorithms, MVO suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems from the engineering and mechanical fields; namely, the three-bar truss, speed reducer, pressure vessel, spring design, welded beam, rolling element bearing, and multiple disc clutch brake problems. In the current study, a modified feasibility-based mechanism is employed to handle constraints; four rules are used to handle the constraints of each problem while maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. The results also reveal that the sine chaotic map is the most effective at boosting MVO's performance.
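As a sketch of how a chaotic map can stand in for the uniform random draws inside a meta-heuristic such as MVO, one common form of the sine map is shown below (the seed is arbitrary and this is an illustration, not the paper's exact formulation):

```python
import math

def sine_map(x):
    """One common form of the sine chaotic map on (0, 1]: x_next = sin(pi * x)."""
    return math.sin(math.pi * x)

# Deterministic chaotic sequence used wherever the meta-heuristic would
# otherwise draw a uniform random number r in [0, 1].
x = 0.7  # any seed in (0, 1) that is not a fixed point
seq = []
for _ in range(5):
    x = sine_map(x)
    seq.append(x)
print(seq)  # deterministic, non-repeating values in (0, 1]
```

Because the sequence is deterministic yet densely explores (0, 1], substituting it for pseudo-random draws can improve the exploration/exploitation balance, which is the effect the paper measures across 10 such maps.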
NASA Astrophysics Data System (ADS)
Hamylton, S.; Andréfouët, S.; Spencer, T.
2012-10-01
Increasing the use of geomorphological map products in marine spatial planning has the potential to greatly enhance the return on mapping investment, as such maps are commonly two orders of magnitude cheaper to produce than biologically focussed maps of benthic communities and shallow substrates. The efficacy of geomorphological maps derived from remotely sensed imagery as surrogates for habitat diversity is explored by comparing two map sets of the platform reefs and atolls of the Amirantes Archipelago (Seychelles), Western Indian Ocean. One mapping campaign utilised Compact Airborne Spectrographic Imagery (19 wavebands, 1 m spatial resolution) to classify 11 islands and associated reefs into 25 biological habitat classes, while the other used Landsat 7 ETM+ imagery (7 bands, 30 m spatial resolution) to generate maps of 14 geomorphic classes. The maps were compared across a range of characteristics, including habitat richness (number of classes mapped), diversity (Shannon-Wiener statistic) and thematic content (Cramer's V statistic). Between maps, a strong relationship was revealed for habitat richness (R2 = 0.76), a moderate relationship for class diversity and evenness (R2 = 0.63) and a variable relationship for thematic content, dependent on site complexity (V range 0.43-0.93). Geomorphic maps emerged as robust predictors of habitat richness in the Amirantes. Such maps therefore demonstrate high potential value for informing coastal management activities and conservation planning by drawing on information beyond their own thematic content, thus maximizing the return on mapping investment.
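The richness and diversity comparisons can be illustrated with a minimal sketch; the two classified rasters below are randomly generated stand-ins for the geomorphic and habitat maps:

```python
import numpy as np

def shannon_wiener(class_map):
    """Shannon-Wiener diversity H' = -sum(p_i * ln p_i) over mapped classes."""
    _, counts = np.unique(class_map, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

# Hypothetical classified rasters for the same reef: a geomorphic map with
# 3 classes and a habitat map with 5 classes.
rng = np.random.default_rng(0)
geomorphic = rng.integers(0, 3, size=(50, 50))
habitat = rng.integers(0, 5, size=(50, 50))

richness = (len(np.unique(geomorphic)), len(np.unique(habitat)))
print(richness)                    # (3, 5)
print(shannon_wiener(geomorphic))  # bounded above by ln(3)
print(shannon_wiener(habitat))     # bounded above by ln(5)
```

The study's surrogacy question is whether statistics like these, computed per site from the cheap geomorphic map, track the same statistics computed from the expensive habitat map.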
Deriving health utilities from the MacNew Heart Disease Quality of Life Questionnaire.
Chen, Gang; McKie, John; Khan, Munir A; Richardson, Jeff R
2015-10-01
Quality of life is included in the economic evaluation of health services by measuring the preference for health states, i.e. health state utilities. However, most intervention studies include a disease-specific instrument, not a utility instrument. Consequently, there has been increasing use of statistical mapping algorithms that permit utilities to be estimated from a disease-specific instrument. The present paper provides such algorithms between the MacNew Heart Disease Quality of Life Questionnaire (MacNew) and six multi-attribute utility (MAU) instruments: the EuroQol (EQ-5D), the Short Form 6D (SF-6D), the Health Utilities Index (HUI) 3, the Quality of Wellbeing (QWB), the 15D (15 Dimension) and the Assessment of Quality of Life (AQoL-8D). Heart disease patients and members of the healthy public were recruited from six countries. Non-parametric rank tests were used to compare subgroup utilities and MacNew scores. Mapping algorithms were estimated using three separate statistical techniques and achieved a high degree of precision. Based on the mean absolute error and the intraclass correlation, the preferred mapping is MacNew into SF-6D or 15D; using the R-squared statistic, the preferred mapping is MacNew into AQoL-8D. The algorithms reported in this paper enable MacNew data to be mapped into utilities predicted from any of the six instruments. This permits studies that have included the MacNew to be used in cost-utility analyses, which, in turn, allows the comparison of services and interventions across the health system. © The European Society of Cardiology 2014.
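The paper estimates its algorithms with three statistical techniques; as a sketch of the general idea only, a simple ordinary-least-squares mapping from disease-specific scores to utilities might look like the following (the domain structure, coefficients, and data are all invented, not the paper's estimates):

```python
import numpy as np

# Simulate "trial" data: hypothetical MacNew-style domain scores and the
# utilities they predict, with invented coefficients and noise.
rng = np.random.default_rng(42)
n = 200
scores = rng.uniform(1, 7, size=(n, 3))       # 3 hypothetical domain scores
true_beta = np.array([0.05, 0.04, 0.03])      # invented coefficients
utility = 0.3 + scores @ true_beta + rng.normal(0, 0.02, n)

# Fit the mapping on the trial data.
X = np.column_stack([np.ones(n), scores])
beta_hat, *_ = np.linalg.lstsq(X, utility, rcond=None)

# Predict a utility for a new patient from disease-specific scores alone.
new_patient = np.array([[1.0, 5.0, 6.0, 4.5]])  # intercept term + 3 scores
print(new_patient @ beta_hat)                    # predicted utility
```

The predicted utility can then feed a cost-utility analysis even though no MAU instrument was administered, which is the practical payoff the abstract describes.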
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations, where the environment is typically unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping (SLAM), using either local or global scan matching approaches. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method; moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a novel low-cost method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique that combines feature-to-feature and point-to-point approaches. The algorithm aims to mitigate error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point (ICP) algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results with very short computation times, indicating its potential use in real-time systems.
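A minimal sketch of the point-to-point component (one ICP iteration with nearest-neighbour association and an SVD-based rigid fit) is shown below; the scans are synthetic and this is not the authors' RKF implementation:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: associate each source point with its
    nearest destination point, then solve the best-fit rigid transform (SVD)."""
    # Brute-force nearest-neighbour association.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best-fit rotation and translation (Kabsch / Procrustes).
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return (R @ src.T).T + t

# A 10x10 grid "scan" and the same scan under a small rigid motion
# (2-degree rotation about the grid centre plus a translation).
g = np.arange(10.0)
scan = np.array([(x, y) for x in g for y in g])
theta = np.radians(2.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
c = scan.mean(0)
moved = (scan - c) @ rot.T + c + np.array([0.1, -0.1])

aligned = scan
for _ in range(5):  # iterate until the matching locks on
    aligned = icp_step(aligned, moved)
print(np.abs(aligned - moved).max())  # near zero after convergence
```

The iterative nature and the brute-force association of each step are exactly the cost and outlier-sensitivity issues the RKF approach is designed to mitigate.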
Stereo-vision-based terrain mapping for off-road autonomous navigation
NASA Astrophysics Data System (ADS)
Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.
2009-05-01
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted on a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
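A toy version of the single-frame mapping step (binning stereo range points into cells that store elevation, a traversability cost, and a no-go flag) might look like the following; the cell size, costs, and points are invented for illustration:

```python
import numpy as np

NO_GO = -1  # sentinel cost for cells containing binary-detected obstacles

def build_cell_map(points, costs, obstacle_mask, cell=0.5, size=20):
    """Accumulate range points into a coarse terrain map storing, per cell,
    the maximum elevation and either a traversability cost or a no-go flag."""
    n = int(size / cell)
    elev = np.full((n, n), np.nan)
    cost = np.zeros((n, n))
    for (x, y, z), c, obs in zip(points, costs, obstacle_mask):
        i, j = int(x / cell), int(y / cell)
        if 0 <= i < n and 0 <= j < n:
            elev[i, j] = z if np.isnan(elev[i, j]) else max(elev[i, j], z)
            if obs:
                cost[i, j] = NO_GO          # binary detection wins
            elif cost[i, j] != NO_GO:
                cost[i, j] = max(cost[i, j], c)
    return elev, cost

# Hypothetical point cloud: flat ground plus one obstacle (a rock) at (3, 4).
pts = [(3.0, 4.0, 0.8), (10.0, 10.0, 0.0), (10.2, 10.1, 0.05)]
elev, cost = build_cell_map(pts, costs=[0.9, 0.1, 0.2],
                            obstacle_mask=[True, False, False])
print(cost[6, 8])    # no-go cell containing the rock
print(cost[20, 20])  # traversable cell; cost from its highest-cost point
```

A real system would also carry classification, roughness, and confidence layers per cell, and merge successive single-frame maps into the temporally filtered world map described above.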
NASA Astrophysics Data System (ADS)
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail, and many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important for elucidating the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations in common marmosets, based on motor thresholds stochastically estimated from motor evoked potentials recorded via chronically implanted micro-electrocorticographical (µECoG) electrode arrays. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated with a modified maximum likelihood threshold-hunting algorithm fitted to the data recorded from the marmosets, and a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm estimates the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we achieved reliable motor mapping in common marmosets with the ECS system.
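The threshold-hunting idea can be sketched as follows: model the probability of an evoked response as a logistic function of stimulus intensity, re-estimate the maximum-likelihood threshold after each trial, and stimulate at the new estimate. This is a simplified stand-in for the modified algorithm; the slope, units, and trial counts are invented:

```python
import numpy as np

def ml_threshold(intensities, responses, slope=1.5,
                 grid=np.linspace(0, 100, 1001)):
    """Maximum-likelihood threshold estimate: pick the candidate threshold
    that best explains the stimulus/response history under a logistic model."""
    p = 1.0 / (1.0 + np.exp(-(intensities[:, None] - grid[None, :]) / slope))
    loglik = np.where(responses[:, None], np.log(p), np.log1p(-p)).sum(0)
    return grid[loglik.argmax()]

# Simulated hunting session with a true threshold of 47 (arbitrary units):
# stimulate, observe the response, re-estimate, stimulate at the estimate.
rng = np.random.default_rng(3)
true_t, slope = 47.0, 1.5
s, r = [50.0], []
for _ in range(30):
    p_resp = 1.0 / (1.0 + np.exp(-(s[-1] - true_t) / slope))
    r.append(bool(rng.random() < p_resp))
    est = ml_threshold(np.array(s), np.array(r, dtype=bool), slope)
    s.append(float(est))
print(s[-1])  # hunts toward the true threshold
```

Because each stimulus is placed at the current estimate, most trials land near the threshold, which is where they carry the most information about it.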
Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.
2009-01-01
Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
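As a minimal sketch of the binary obstacle labeling step described above (the cell size, slope threshold, and the slope-based rule are illustrative assumptions, not JPL's actual detectors):

```python
import numpy as np

def binary_obstacle_cells(elevation, cell_size=0.2, slope_limit=0.5):
    """Label grid-map cells as no-go when the slope between adjacent
    cells exceeds a threshold. elevation: 2-D array of per-cell
    terrain heights in meters; cell_size in meters."""
    rise_x = np.abs(np.diff(elevation, axis=0)) / cell_size
    rise_y = np.abs(np.diff(elevation, axis=1)) / cell_size
    no_go = np.zeros(elevation.shape, dtype=bool)
    # mark both cells bordering a step steeper than the limit
    no_go[:-1, :] |= rise_x > slope_limit
    no_go[1:, :] |= rise_x > slope_limit
    no_go[:, :-1] |= rise_y > slope_limit
    no_go[:, 1:] |= rise_y > slope_limit
    return no_go
```

A flat map yields no no-go cells, while a vertical step taller than `slope_limit * cell_size` marks the cells on both sides of the step.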
Satellite-map position estimation for the Mars rover
NASA Technical Reports Server (NTRS)
Hayashi, Akira; Dean, Thomas
1989-01-01
A method for locating the Mars rover using an elevation map generated from satellite data is described. In exploring its environment, the rover is assumed to generate a local rover-centered elevation map that can be used to extract information about the relative position and orientation of landmarks corresponding to local maxima. These landmarks are integrated into a stochastic map which is then matched with the satellite map to obtain an estimate of the robot's current location. The landmarks are not explicitly represented in the satellite map. The results of the matching algorithm correspond to a probabilistic assessment of whether or not the robot is located within a given region of the satellite map. By assigning a probabilistic interpretation to the information stored in the satellite map, researchers are able to provide a precise characterization of the results computed by the matching algorithm.
Autonomous exploration and mapping of unknown environments
NASA Astrophysics Data System (ADS)
Owens, Jason; Osteen, Phil; Fields, MaryAnne
2012-06-01
Autonomous exploration and mapping is a vital capability for future robotic systems expected to function in arbitrary complex environments. In this paper, we describe an end-to-end robotic solution for remotely mapping buildings. In a typical mapping mission, an unmanned system is directed to enter an unknown building at a distance, sense the internal structure, and, barring additional tasks, create a 2-D map of the building while in situ. This map provides a useful and intuitive representation of the environment for the remote operator. We have integrated a robust mapping and exploration system utilizing laser range scanners and RGB-D cameras, and we demonstrate an exploration and metacognition algorithm on a robotic platform. The algorithm allows the robot to safely navigate the building, explore the interior, report significant features to the operator, and generate a consistent map, all while maintaining localization.
The MAP Spacecraft Angular State Estimation After Sensor Failure
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2003-01-01
This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point, from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out, and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude, whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis; an explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, the conclusions have far-reaching consequences.
The Effect of Sensor Failure on the Attitude and Rate Estimation of MAP Spacecraft
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2003-01-01
This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis. An explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, its conclusions are more general.
NASA Technical Reports Server (NTRS)
Baxa, Ernest G., Jr.; Lee, Jonggil
1991-01-01
The pulse pair method for spectrum parameter estimation is commonly used in pulse Doppler weather radar signal processing since it is economical to implement and can be shown to be a maximum likelihood estimator. With the use of airborne weather radar for windshear detection, the turbulent weather and strong ground clutter return spectrum differs from that assumed in its derivation, so the performance robustness of the pulse pair technique must be understood. Here, the effect of radar system pulse-to-pulse phase jitter and signal spectrum skew on the pulse pair algorithm performance is discussed. The phase jitter effect may be significant when the weather return signal-to-clutter ratio is very low and clutter rejection filtering is attempted. The analysis can be used to develop design specifications for airborne radar system phase stability. It is also shown that the weather return spectrum skew can cause a significant bias in the pulse pair mean windspeed estimates, and that the poly pulse pair algorithm can reduce this bias. It is suggested that use of a spectrum mode estimator may be more appropriate in characterizing the windspeed within a radar range resolution cell for detection of hazardous windspeed gradients.
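A minimal sketch of the pulse-pair mean-velocity estimator discussed above, assuming a single-component signal and the phase convention stated in the comment (parameter names are illustrative):

```python
import numpy as np

def pulse_pair_velocity(iq, prt, wavelength):
    """Pulse-pair mean radial velocity from complex I/Q returns.
    Convention: motion away from the radar advances the phase by
    -4*pi*v*prt/wavelength per pulse; the estimate is unambiguous
    for |v| < wavelength / (4 * prt)."""
    r1 = np.mean(np.conj(iq[:-1]) * iq[1:])        # lag-1 autocorrelation
    return -wavelength / (4 * np.pi * prt) * np.angle(r1)
```

For a noiseless complex exponential the estimator recovers the true velocity exactly; spectrum skew biases it by pulling the lag-1 phase away from the mean-frequency phase.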
Topological mappings of video and audio data.
Fyfe, Colin; Barbakh, Wesam; Ooi, Wei Chuan; Ko, Hanseok
2008-12-01
We review a new form of self-organizing map which is based on a nonlinear projection of latent points into data space, identical to that performed in the Generative Topographic Mapping (GTM).(1) But whereas the GTM is an extension of a mixture of experts, this model is an extension of a product of experts.(2) We show visualisation and clustering results on a data set composed of video data of lips uttering 5 Korean vowels. Finally, we note that we may dispense with the probabilistic underpinnings of the product of experts and derive the same algorithm as a minimisation of mean squared error between the prototypes and the data. This leads us to suggest a new algorithm which incorporates local and global information in the clustering. Both of the new algorithms achieve better results than the standard Self-Organizing Map.
Du, Jia; Younes, Laurent; Qiu, Anqi
2011-01-01
This paper introduces a novel large deformation diffeomorphic metric mapping algorithm for whole brain registration where sulcal and gyral curves, cortical surfaces, and intensity images are simultaneously carried from one subject to another through a flow of diffeomorphisms. To the best of our knowledge, this is the first time that the diffeomorphic metric from one brain to another is derived in a shape space of intensity images and point sets (such as curves and surfaces) in a unified manner. We describe the Euler–Lagrange equation associated with this algorithm with respect to momentum, a linear transformation of the velocity vector field of the diffeomorphic flow. The numerical implementation for solving this variational problem, which involves large-scale kernel convolution in an irregular grid, is made feasible by introducing a class of computationally friendly kernels. We apply this algorithm to align magnetic resonance brain data. Our whole brain mapping results show that our algorithm outperforms the image-based LDDMM algorithm in terms of the mapping accuracy of gyral/sulcal curves, sulcal regions, and cortical and subcortical segmentation. Moreover, our algorithm provides better whole brain alignment than combined volumetric and surface registration (Postelnicu et al., 2009) and hierarchical attribute matching mechanism for elastic registration (HAMMER) (Shen and Davatzikos, 2002) in terms of cortical and subcortical volume segmentation. PMID:21281722
NASA Astrophysics Data System (ADS)
Liu, Zhi; Zhou, Baotong; Zhang, Changnian
2017-03-01
Vehicle-mounted panoramic systems are important safety-assistance equipment for driving. However, traditional systems render only a fixed top-down perspective view with a limited field of view, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm renders a good 3D panorama and allows the viewpoint to be changed freely.
Algebraic grid generation using tensor product B-splines. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Saunders, B. V.
1985-01-01
Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This work develops an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping, and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid, is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
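The transfinite bilinear interpolant mentioned above can be sketched as follows (a minimal version assuming all four boundary curves are sampled with the same number of points; not the dissertation's B-spline implementation):

```python
import numpy as np

def transfinite_bilinear(bottom, top, left, right):
    """Transfinite bilinear interpolant of four boundary curves, each
    an (n, 2) array of points; curves must agree at shared corners.
    bottom/top are parameterized by u, left/right by v."""
    n = len(bottom)
    u = np.linspace(0.0, 1.0, n)[None, :, None]   # column (u) parameter
    v = np.linspace(0.0, 1.0, n)[:, None, None]   # row (v) parameter
    # subtract the doubly-counted corner contribution
    corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1])
    return ((1 - v) * bottom[None, :] + v * top[None, :]
            + (1 - u) * left[:, None] + u * right[:, None]
            - corners)
```

With the unit-square boundary as input, the interpolant reproduces the identity map, which is the standard sanity check for this construction.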
Algorithms and Complexity Results for Genome Mapping Problems.
Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric
2017-01-01
Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.
Fast object detection algorithm based on HOG and CNN
NASA Astrophysics Data System (ADS)
Lu, Tongwei; Wang, Dandan; Zhang, Yanduo
2018-04-01
Object classification and object detection are widely used in computer vision. Traditional object detection has two main problems: the sliding-window region selection strategy has high time complexity and produces redundant windows, and the features used are not sufficiently robust. To solve these problems, a Region Proposal Network (RPN) is used to select candidate regions instead of the selective search algorithm. Compared with traditional algorithms and the selective search algorithm, the RPN has higher efficiency and accuracy. We combine HOG features and a convolutional neural network (CNN) to extract features, and we use an SVM for classification. For TorontoNet, our algorithm's mAP is 1.6 percentage points higher; for OxfordNet, it is 1.3 percentage points higher.
2009-01-01
Background ESTs or variable sequence reads can be available in prokaryotic studies well before a complete genome is known. Use cases include (i) transcriptome studies and (ii) single-cell sequencing of bacteria. Without suitable software, their further analysis and mapping would have to await finalization of the corresponding genome. Results The tool JANE rapidly maps ESTs or variable sequence reads in prokaryotic sequencing and transcriptome efforts to related template genomes. It provides an easy-to-use graphical interface for information retrieval and a toolkit for EST or nucleotide sequence function prediction. Furthermore, for rapid mapping we developed an enhanced sequence alignment algorithm which reassembles and evaluates high-scoring pairs provided by the BLAST algorithm. Rapid assembly on, and replacement of, the template genome by sequence reads or mapped ESTs is achieved. This is illustrated (i) by data from Staphylococci as well as from a Blattabacteria sequencing effort, and (ii) by mapping single-cell sequencing reads from poribacteria to the sister-phylum representative Rhodopirellula baltica SH1. The algorithm has been implemented in a web-server accessible at http://jane.bioapps.biozentrum.uni-wuerzburg.de. Conclusion Rapid prokaryotic EST mapping or mapping of sequence reads is achieved by applying JANE even without knowing the cognate genome sequence. PMID:19943962
NASA Astrophysics Data System (ADS)
Niazmardi, S.; Safari, A.; Homayouni, S.
2017-09-01
Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information regarding the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from different images of the SITS data and then combined into a composite kernel using an MKL algorithm. The composite kernel, once constructed, can be used for classification of the data using kernel-based classification algorithms. We compared the computational time and classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are MKL-Sum, SimpleMKL, LPMKL, and Group-Lasso MKL. Experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that the strategy provides better performance than the standard classification algorithm. The results also showed that the optimization method of the MKL algorithm used affects both the computational time and the classification accuracy of the strategy.
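The composite-kernel construction at the heart of this strategy can be sketched as follows (a fixed-weight version; the cited MKL algorithms learn the weights, and the RBF kernel and its parameter here are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the Gaussian RBF kernel for one image's features."""
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def composite_kernel(kernels, weights):
    """Weighted sum of per-image kernels; nonnegative weights keep the
    combination positive semidefinite, so any kernel-based classifier
    can consume it directly."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0):
        raise ValueError("kernel weights must be nonnegative")
    return sum(wi * K for wi, K in zip(w, kernels))
```

With weights summing to one, the composite kernel keeps a unit diagonal and remains symmetric positive semidefinite.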
Assignment Of Finite Elements To Parallel Processors
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.
1990-01-01
Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
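A minimal sketch of simulated-annealing assignment of elements to processors (the cost function here, maximum processor load plus cut edges, is a simplified stand-in for the approximate computation time minimized in the article):

```python
import math
import random

def anneal_mapping(n_elems, edges, n_procs, steps=20000, t0=2.0, seed=0):
    """Assign elements to processors by simulated annealing. edges is a
    list of (u, v) element pairs that communicate; the cost penalizes
    load imbalance and edges whose endpoints land on different
    processors."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_procs) for _ in range(n_elems)]

    def cost(a):
        loads = [0] * n_procs
        for p in a:
            loads[p] += 1
        cut = sum(1 for u, v in edges if a[u] != a[v])
        return max(loads) + cut

    c = cost(assign)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9   # linear cooling schedule
        i = rng.randrange(n_elems)
        old = assign[i]
        assign[i] = rng.randrange(n_procs)
        c_new = cost(assign)
        # accept improvements always, uphill moves with Boltzmann probability
        if c_new <= c or rng.random() < math.exp(-(c_new - c) / temp):
            c = c_new
        else:
            assign[i] = old
    return assign, c
```

On a chain of eight elements split across two processors, the best achievable cost is 5 (balanced load of 4 plus a single cut edge), and the annealer approaches it from a random start.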
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
An Alternative Retrieval Algorithm for the Ozone Mapping and Profiler Suite Limb Profiler
2012-05-01
behavior of aerosol extinction from the upper troposphere through the stratosphere is critical for retrieving ozone in this region. Aerosol scattering is...
Power Control and Optimization of Photovoltaic and Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Ghaffari, Azad
Power map and Maximum Power Point (MPP) of Photovoltaic (PV) and Wind Energy Conversion Systems (WECS) highly depend on system dynamics and environmental parameters, e.g., solar irradiance, temperature, and wind speed. Power optimization algorithms for PV systems and WECS are collectively known as Maximum Power Point Tracking (MPPT) algorithms. Gradient-based Extremum Seeking (ES), as a non-model-based MPPT algorithm, governs the system to its peak point on the steepest descent curve regardless of changes of the system dynamics and variations of the environmental parameters. Since the power map shape defines the gradient vector, a close estimate of the power map shape is needed to create user-assignable transients in the MPPT algorithm. The Hessian gives a precise estimate of the power map in a neighborhood around the MPP. The estimate of the inverse of the Hessian, in combination with the estimate of the gradient vector, is the key part of implementing the Newton-based ES algorithm. Hence, we generate an estimate of the Hessian using our proposed perturbation matrix. Also, we introduce a dynamic estimator to calculate the inverse of the Hessian, which is an essential part of our algorithm. We present various simulations and experiments on micro-converter PV systems to verify the validity of our proposed algorithm. The ES scheme can also be used in combination with other control algorithms to achieve desired closed-loop performance. The WECS dynamics are slow, which causes an even slower response time for the ES-based MPPT. Hence, we present a control scheme, extended from Field-Oriented Control (FOC), in combination with feedback linearization to reduce the convergence time of the closed-loop system. Furthermore, the nonlinear control prevents magnetic saturation of the stator of the Induction Generator (IG).
The proposed control algorithm in combination with the ES guarantees the closed-loop system robustness with respect to high level parameter uncertainty in the IG dynamics. The simulation results verify the effectiveness of the proposed algorithm.
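A toy sketch of gradient-based extremum seeking on a static scalar power map (the gains, perturbation amplitude, and the quadratic map in the usage note are illustrative assumptions, not the dissertation's PV or WECS models):

```python
import math

def extremum_seek(power, theta0, a=0.05, omega=25.0, gain=0.8,
                  dt=0.01, steps=20000):
    """Scalar gradient-based extremum seeking: a sinusoidal
    perturbation probes the power map, demodulation by the same
    sinusoid yields a gradient estimate (on average a*f'(theta)/2),
    and integrating it drives the operating point toward the peak."""
    theta = theta0
    for n in range(steps):
        s = math.sin(omega * n * dt)
        y = power(theta + a * s)      # perturbed power measurement
        theta += dt * gain * y * s    # demodulate and integrate
    return theta
```

For example, on the hypothetical map `power = lambda v: 1.0 - (v - 0.6) ** 2`, starting from `theta0 = 0.2` the scheme settles near the maximizer 0.6 without any model of the map.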
Multifractal detrending moving-average cross-correlation analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities.
For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract rational multifractal nature.
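One inner step of the backward MFXDMA variant, the q-th order cross-fluctuation at a single scale, can be sketched as follows (a simplified illustration of the segment-wise detrending, not the full multifractal spectrum estimation):

```python
import numpy as np

def mfxdma_fluctuation(x, y, scale, q=2.0):
    """q-th order cross-fluctuation at one scale, backward variant:
    each profile is detrended by its own trailing moving average of
    window `scale`, and the absolute segment-wise cross-covariances
    are aggregated at order q/2."""
    X = np.cumsum(x - np.mean(x))                 # profile of x
    Y = np.cumsum(y - np.mean(y))                 # profile of y
    kernel = np.ones(scale) / scale
    eX = X[scale - 1:] - np.convolve(X, kernel, mode='valid')
    eY = Y[scale - 1:] - np.convolve(Y, kernel, mode='valid')
    n_seg = len(eX) // scale
    f2 = np.array([np.mean(eX[k * scale:(k + 1) * scale]
                           * eY[k * scale:(k + 1) * scale])
                   for k in range(n_seg)])
    return np.mean(np.abs(f2) ** (q / 2.0)) ** (1.0 / q)
```

Repeating this over a range of scales and fitting log F(q, s) versus log s gives the scaling exponent hxy(q) discussed above; for uncorrelated increments the fluctuation grows roughly as the square root of the scale.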
Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework
Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko
2015-01-01
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We overcome these shortcomings with novel tile unwrapping and merging algorithms, together with a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms, we compared our method to existing approaches. We show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for the efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
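The core unwrapping principle underlying the tile-based approach can be illustrated in one dimension (Itoh's method; the tile tessellation and merging steps described above are not shown):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh's 1-D phase unwrapper: wrap successive differences back
    into (-pi, pi], then integrate them starting from the first
    sample. Valid when the true phase changes by less than pi between
    samples."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2.0 * np.pi) - np.pi   # wrapped phase differences
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))
```

A smooth ramp wrapped into (-π, π] is recovered exactly; phase residues in 2-D are precisely the configurations where such path integration becomes path-dependent, which is what the tile merging must handle.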
Molecular surface mesh generation by filtering electron density map.
Giard, Joachim; Macq, Benoît
2010-01-01
Bioinformatics applied to macromolecules is now widespread and continuously expanding. In this context, representing external molecular surfaces such as the Van der Waals surface or the Solvent Excluded Surface can be useful for several applications. We propose a fast and parameterizable algorithm that produces meshes of good visual quality representing molecular surfaces. The mesh is obtained by isosurfacing a filtered electron density map. The density map is the maximum of Gaussian functions placed around the atom centers. This map is filtered by an ideal low-pass filter applied to the Fourier transform of the density map. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
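A minimal sketch of the density-map construction and ideal low-pass filtering described above (grid size, sigma, and cutoff are illustrative assumptions; the final marching-cubes isosurfacing step is only indicated):

```python
import numpy as np

def gaussian_density(centers, sigma, axis):
    """Electron-density-like map: the maximum of Gaussians placed at
    the atom centers, sampled on a cubic grid with per-axis
    coordinates `axis`. centers: (m, 3) array."""
    pts = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1)
    d2 = ((pts[..., None, :] - centers) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).max(axis=-1)

def ideal_lowpass(volume, cutoff):
    """Ideal low-pass filter: zero out Fourier components above the
    cutoff (in cycles per sample) and invert. An isosurface of the
    result would then be extracted with marching cubes."""
    spectrum = np.fft.fftn(volume)
    f = np.fft.fftfreq(volume.shape[0])
    fx, fy, fz = np.meshgrid(f, f, f, indexing='ij')
    mask = np.sqrt(fx ** 2 + fy ** 2 + fz ** 2) <= cutoff
    return np.fft.ifftn(spectrum * mask).real
```

The sharp Fourier cutoff preserves the DC component (the mean density) while smoothing the small-scale bumps between neighboring atoms, which is what yields the visually smooth isosurface.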
Improving the interoperability of biomedical ontologies with compound alignments.
Oliveira, Daniela; Pesquita, Catia
2018-01-09
Ontologies are commonly used to annotate and help process life sciences data. Although their original goal is to facilitate integration and interoperability among heterogeneous data sources, when these sources are annotated with distinct ontologies, bridging this gap can be challenging. In the last decade, ontology matching systems have been evolving and are now capable of producing high-quality mappings for life sciences ontologies, usually limited to the equivalence between two ontologies. However, life sciences research is becoming increasingly transdisciplinary and integrative, fostering the need to develop matching strategies that are able to handle multiple ontologies and more complex relations between their concepts. We have developed ontology matching algorithms that are able to find compound mappings between multiple biomedical ontologies, in the form of ternary mappings, finding for instance that "aortic valve stenosis"(HP:0001650) is equivalent to the intersection between "aortic valve"(FMA:7236) and "constricted" (PATO:0001847). The algorithms take advantage of search space filtering based on partial mappings between ontology pairs, to be able to handle the increased computational demands. The evaluation of the algorithms has shown that they are able to produce meaningful results, with precision in the range of 60-92% for new mappings. The algorithms were also applied to the potential extension of logical definitions of the OBO and the matching of several plant-related ontologies. This work is a first step towards finding more complex relations between multiple ontologies. The evaluation shows that the results produced are significant and that the algorithms could satisfy specific integration needs.
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
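The IMM cycle (mixing, per-model Kalman filtering, model-probability update) can be illustrated with a toy scalar sketch; the identity dynamics, direct observation, and all gain/noise values below are illustrative assumptions, not the paper's pharmacokinetic models:

```python
import numpy as np

def imm_step(x, P, mu, z, Pi, Q, R):
    """One IMM cycle for a scalar random-walk parameter observed directly.
    x, P, mu: per-model means, variances, model probabilities (length-M arrays).
    Pi: model transition matrix; Q: per-model process noise; R: measurement noise."""
    M = len(mu)
    # 1. Mixing: blend per-model estimates by transition probabilities.
    c = Pi.T @ mu                          # predicted model probabilities
    w = (Pi * mu[:, None]) / c[None, :]    # mixing weights w[i, j]
    x0 = w.T @ x
    P0 = np.array([np.sum(w[:, j] * (P + (x - x0[j]) ** 2)) for j in range(M)])
    # 2. Per-model Kalman filter (identity dynamics, direct observation).
    xp, Pp = x0, P0 + Q
    S = Pp + R
    K = Pp / S
    xn = xp + K * (z - xp)
    Pn = (1 - K) * Pp
    # 3. Model probabilities updated by Gaussian innovation likelihoods.
    lik = np.exp(-0.5 * (z - xp) ** 2 / S) / np.sqrt(2 * np.pi * S)
    mu_n = c * lik
    mu_n /= mu_n.sum()
    return xn, Pn, mu_n, float(mu_n @ xn)

# A parameter that jumps from 1.0 to 3.0 mid-way, tracked by a slow and a fast model.
rng = np.random.default_rng(0)
truth = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)])
x = np.array([1.0, 1.0]); P = np.array([1.0, 1.0]); mu = np.array([0.5, 0.5])
Pi = np.array([[0.97, 0.03], [0.03, 0.97]])
Q = np.array([1e-6, 0.5]); R = 0.04
for zk in truth + rng.normal(0, 0.2, 100):
    x, P, mu, est = imm_step(x, P, mu, zk, Pi, Q, R)
```

The low-noise model dominates while the parameter is constant; after the jump, its likelihood collapses and the high-noise model takes over, which is the behavior that makes IMM suitable for unstable patients.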
A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-03-24
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
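The WiFi side of the fusion can be illustrated with a minimal KNN fingerprinting sketch; the RSS vectors and coordinates are made up, and the floor-map topology constraint described in the abstract (restricting candidates to reachable reference points) is noted but omitted:

```python
import math

def knn_position(rss, fingerprints, k=3):
    """Estimate (x, y) from a WiFi RSS vector by averaging the coordinates of
    the k reference points (RPs) whose stored fingerprints are closest in
    signal space. The paper's topology constraint would first filter the RPs
    to those reachable on the floor-map graph; that step is omitted here."""
    scored = sorted(fingerprints,
                    key=lambda rp: math.dist(rss, rp["rss"]))[:k]
    x = sum(rp["xy"][0] for rp in scored) / k
    y = sum(rp["xy"][1] for rp in scored) / k
    return (x, y)

rps = [
    {"xy": (0.0, 0.0), "rss": [-40, -70, -80]},
    {"xy": (2.0, 0.0), "rss": [-50, -60, -80]},
    {"xy": (4.0, 0.0), "rss": [-60, -50, -75]},
    {"xy": (0.0, 3.0), "rss": [-45, -75, -60]},
]
est = knn_position([-48, -62, -79], rps, k=2)
```

In the full scheme these coordinates become one observation stream feeding the adaptive fading-factor EKF alongside the P-O measurements.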
Automatic detection of artifacts in converted S3D video
NASA Astrophysics Data System (ADS)
Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail
2014-03-01
In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
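The core edge-sharpness comparison can be sketched on synthetic views; the sharpness proxy (peak horizontal gradient in a band around the edge) and the ratio threshold are simplifying assumptions, and the real detector operates on disparity-matched boundaries rather than a fixed column:

```python
import numpy as np

def edge_sharpness(img, col, halfwidth=3):
    # Sharpness proxy: maximum absolute horizontal gradient in a band around the edge.
    band = img[:, col - halfwidth:col + halfwidth + 1]
    return np.abs(np.diff(band, axis=1)).max()

def sharpness_mismatch(left, right, col, ratio=1.5):
    """Flag an edge column whose sharpness differs strongly between views,
    a symptom of depth-map blurriness after DIBR warping."""
    sL, sR = edge_sharpness(left, col), edge_sharpness(right, col)
    return bool(max(sL, sR) / max(min(sL, sR), 1e-9) > ratio)

# Synthetic views: a step edge at column 8, sharp in the left view,
# linearly blurred over 6 pixels in the right view.
x = np.arange(16, dtype=float)
left = np.tile((x >= 8).astype(float), (16, 1))
right = np.tile(np.clip((x - 5) / 6, 0, 1), (16, 1))
flag = sharpness_mismatch(left, right, 8)
```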
AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation
Yuan, Xin; Martínez-Ortega, José-Fernán; Fernández, José Antonio Sánchez; Eckert, Martina
2017-01-01
In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. Moreover, a detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is the enhancement of the accuracy and robustness of the SLAM-based navigation problem for underwater robotics with low computational costs. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Here, the prediction and update stages (which also exist in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, the AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. Concerning the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, which avoids the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and a mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method achieves reliable map revisiting and consistent map updating on loop closure. PMID:28531135
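The augmentation stage, appending a newly observed landmark to the joint state vector and covariance, can be sketched as follows. This is the generic EKF-SLAM range-bearing formulation, not the paper's exact equations; the pose, noise, and measurement values are illustrative:

```python
import numpy as np

def augment(state, P, r, b, R_rb):
    """Append a newly observed landmark to the SLAM state.
    state = [x, y, theta, l1x, l1y, ...]; (r, b) is a range-bearing
    observation with measurement covariance R_rb."""
    x, y, th = state[:3]
    a = th + b
    lx, ly = x + r * np.cos(a), y + r * np.sin(a)
    n = len(state)
    # Jacobians of the new landmark w.r.t. the state and the (r, b) measurement.
    Gx = np.zeros((2, n))
    Gx[:, 0:3] = [[1, 0, -r * np.sin(a)], [0, 1, r * np.cos(a)]]
    Gz = np.array([[np.cos(a), -r * np.sin(a)],
                   [np.sin(a),  r * np.cos(a)]])
    state_new = np.append(state, [lx, ly])
    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P
    P_new[n:, :n] = Gx @ P                 # cross-covariance with existing state
    P_new[:n, n:] = (Gx @ P).T
    P_new[n:, n:] = Gx @ P @ Gx.T + Gz @ R_rb @ Gz.T
    return state_new, P_new

state = np.array([0.0, 0.0, 0.0])
P = np.diag([0.01, 0.01, 0.001])
state2, P2 = augment(state, P, r=2.0, b=np.pi / 2, R_rb=np.diag([0.02, 0.001]))
```

Storing these cross-covariances explicitly is what lets loop closure correct both the pose and every correlated landmark at once.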
Towards mapping of rock walls using a UAV-mounted 2D laser scanner in GPS denied environments
NASA Astrophysics Data System (ADS)
Turner, Glen
In geotechnical engineering, the stability of rock excavations and walls is estimated by using tools that include a map of the orientations of exposed rock faces. However, measuring these orientations by using conventional methods can be time consuming, sometimes dangerous, and is limited to regions of the exposed rock that are reachable by a human. This thesis introduces a 2D, simulated, quadcopter-based rock wall mapping algorithm for GPS denied environments such as underground mines or near high walls on the surface. The proposed algorithm employs techniques from the field of robotics known as simultaneous localization and mapping (SLAM) and is a step towards 3D rock wall mapping. Quadcopters are not only agile but can also hover, which is very useful in confined spaces such as underground mines or near rock walls. The quadcopter requires sensors to enable self localization and mapping in dark, confined and GPS denied environments. However, these sensors are limited by the quadcopter payload and power restrictions. Because of these restrictions, a lightweight 2D laser scanner is proposed. As a first step towards a 3D mapping algorithm, this thesis proposes a simplified scenario in which a simulated 1D laser range finder and 2D IMU are mounted on a quadcopter that is moving on a plane. Because the 1D laser does not provide enough information to map the 2D world from a single measurement, many measurements are combined over the trajectory of the quadcopter. Least Squares Optimization (LSO) is used to optimize the estimated trajectory and rock face for all data collected over the length of a flight. Simulation results show that the mapping algorithm developed is a good first step. It shows that by combining measurements over a trajectory, the scanned rock face can be estimated using a lower-dimensional range sensor. A swathing manoeuvre is introduced as a way to promote loop closures within a short time period, thus reducing accumulated error.
Some suggestions on how to improve the algorithm are also provided.
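The idea of recovering a wall profile by combining many 1D range measurements along a trajectory can be illustrated with a much-simplified least-squares sketch. Unlike the thesis's LSO, the poses here are assumed known and the wall is a straight line; all numbers are illustrative:

```python
import numpy as np

# A quadcopter flies vertically past a planar rock face x = a + b*y, measuring
# the horizontal range r_i = (a + b*y_i) - x_i from known poses (x_i, y_i).
# With known poses the wall parameters follow from linear least squares; the
# thesis's LSO additionally optimizes the (unknown) trajectory itself.
rng = np.random.default_rng(1)
a_true, b_true = 5.0, 0.2
y = np.linspace(0.0, 10.0, 40)          # altitude along the flight
x = 1.0 + 0.1 * np.sin(y)               # drifting lateral position
r = (a_true + b_true * y) - x + rng.normal(0, 0.02, y.size)

# Each measurement gives one linear equation a + b*y_i = r_i + x_i.
A = np.column_stack([np.ones_like(y), y])
(a_est, b_est), *_ = np.linalg.lstsq(A, r + x, rcond=None)
```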
Xiao, Chuan-Le; Mai, Zhi-Biao; Lian, Xin-Lei; Zhong, Jia-Yong; Jin, Jing-Jie; He, Qing-Yu; Zhang, Gong
2014-01-01
Correct and bias-free interpretation of deep sequencing data is inevitably dependent on the complete mapping of all mappable reads to the reference sequence, especially for quantitative RNA-seq applications. Seed-based algorithms are generally slow but robust, while Burrows-Wheeler Transform (BWT) based algorithms are fast but less robust. To have both advantages, we developed an algorithm, FANSe2, with an iterative mapping strategy based on the statistics of real-world sequencing error distribution to substantially accelerate the mapping without compromising the accuracy. Its sensitivity and accuracy are higher than the BWT-based algorithms in tests using both prokaryotic and eukaryotic sequencing datasets. The gene identification results of FANSe2 are experimentally validated, while the previous algorithms have false positives and false negatives. FANSe2 showed remarkably better consistency with the microarray than most other algorithms in terms of gene expression quantification. We implemented a scalable and almost maintenance-free parallelization method that can utilize the computational power of multiple office computers, a novel feature not present in any other mainstream algorithm. With three normal office computers, we demonstrated that FANSe2 mapped an RNA-seq dataset generated from an entire Illumina HiSeq 2000 flowcell (8 lanes, 608 M reads) to the masked human genome within 4.1 hours with higher sensitivity than Bowtie/Bowtie2. FANSe2 thus provides robust accuracy, full indel sensitivity, fast speed, versatile compatibility and economical computational utilization, making it a useful and practical tool for deep sequencing applications. FANSe2 is freely available at http://bioinformatics.jnu.edu.cn/software/fanse2/.
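The seed-based strategy the abstract contrasts with BWT methods can be illustrated with a toy seed-and-extend mapper; the sequences, seed length, and mismatch budget are made up, and FANSe2's error-statistics-guided iteration is not reproduced:

```python
def map_read(read, genome, seed_len=8, max_mismatch=2):
    """Toy seed-and-extend mapper: exact-match a seed from the read, then
    verify the full alignment at each candidate locus, tolerating a few
    mismatches. FANSe2 additionally iterates seed placement guided by
    real-world sequencing-error statistics."""
    # Index all seed positions in the reference.
    index = {}
    for i in range(len(genome) - seed_len + 1):
        index.setdefault(genome[i:i + seed_len], []).append(i)
    hits = []
    for pos in index.get(read[:seed_len], []):
        window = genome[pos:pos + len(read)]
        if len(window) == len(read):
            mm = sum(a != b for a, b in zip(read, window))
            if mm <= max_mismatch:
                hits.append((pos, mm))
    return hits

genome = "ACGTACGTTTGCAGGCTAACGTACGTTGCAGG"
hits = map_read("ACGTACGTTT", genome)
```

Verifying the full read at every seed hit is what makes seed-based mappers robust to sequencing errors at the cost of speed.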
Clustering of color map pixels: an interactive approach
NASA Astrophysics Data System (ADS)
Moon, Yiu Sang; Luk, Franklin T.; Yuen, K. N.; Yeung, Hoi Wo
2003-12-01
The demand for digital maps continues to rise as mobile electronic devices become more popular. Instead of creating the entire map from scratch, we may convert a scanned paper map into a digital one. Color clustering is the very first step of the conversion process. Currently, most existing clustering algorithms are fully automatic. They are fast and efficient but may not work well in map conversion because of the numerous ambiguities associated with printed maps. Here we introduce two interactive approaches for color clustering on the map: color clustering with pre-calculated index colors (PCIC) and color clustering with pre-calculated color ranges (PCCR). We also introduce a memory model that can enhance and integrate different image processing techniques for fine-tuning the clustering results. Problems and examples of the algorithms are discussed in the paper.
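The PCIC idea, assigning every pixel to the nearest of a set of user-supplied index colours, can be sketched minimally; the palette and pixels below are illustrative:

```python
def cluster_to_index_colors(pixels, index_colors):
    """Assign every pixel to its nearest pre-calculated index colour (the PCIC
    idea: the palette is chosen interactively by the user rather than found by
    a fully automatic clusterer). Colours are (R, G, B) tuples; distance is
    squared Euclidean in RGB space."""
    def nearest(p):
        return min(range(len(index_colors)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(p, index_colors[i])))
    return [nearest(p) for p in pixels]

palette = [(0, 0, 0), (255, 255, 255), (200, 30, 30)]   # black, white, red
labels = cluster_to_index_colors(
    [(10, 5, 0), (240, 250, 248), (180, 60, 40)], palette)
```

PCCR would replace the nearest-colour test with per-cluster colour ranges, which tolerates the halftoning noise of printed maps.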
Segmentation algorithm on smartphone dual camera: application to plant organs in the wild
NASA Astrophysics Data System (ADS)
Bertrand, Sarah; Cerutti, Guillaume; Tougne, Laure
2018-04-01
In order to identify the species of a tree, botanists inspect its different organs: the leaves, the bark, the flowers and the fruits. To develop an algorithm that automatically identifies the species, we need to extract these objects of interest from their complex natural environment. In this article, we focus on the segmentation of flowers and fruits, and we present a new segmentation method based on an active contour algorithm using two probability maps. The first map is constructed via the dual camera found on the back of the latest smartphones. The second map is made with the help of a multilayer perceptron (MLP). The combination of these two maps to drive the evolution of the object contour allows an efficient segmentation of the organ from a natural background.
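Fusing the two per-pixel probability maps can be sketched as follows; the weighted geometric mean is one plausible fusion rule chosen for illustration, not necessarily the paper's exact combination:

```python
import numpy as np

def combined_map(depth_prob, mlp_prob, w=0.5):
    """Fuse the two per-pixel object-probability maps that drive the active
    contour: one from the smartphone dual-camera depth, one from an MLP colour
    classifier. A weighted geometric mean keeps the fused value high only
    where both sources agree the pixel belongs to the organ."""
    return (depth_prob ** w) * (mlp_prob ** (1 - w))

depth = np.array([[0.9, 0.2], [0.8, 0.1]])   # foreground likely in column 0
mlp = np.array([[0.8, 0.3], [0.9, 0.2]])
fused = combined_map(depth, mlp)
```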
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
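The multi-key idea, one shuffle serving several related algorithms by prefixing intermediate keys with an algorithm id, can be simulated in pure Python; this is a conceptual sketch, not Hadoop, and the example algorithms are made up:

```python
from collections import defaultdict

def mrpack_run(records, algorithms):
    """Simulate the MRPack idea: several related algorithms share one
    MapReduce job, with intermediate keys prefixed by an algorithm id
    (the "multi-key" scheme) so a single shuffle serves all of them."""
    # Map phase: every mapper applies all algorithms to each record.
    shuffled = defaultdict(list)
    for rec in records:
        for alg_id, (map_fn, _) in algorithms.items():
            for k, v in map_fn(rec):
                shuffled[(alg_id, k)].append(v)   # composite multi-key
    # Reduce phase: dispatch on the algorithm id embedded in the key.
    out = {}
    for (alg_id, k), vals in shuffled.items():
        out[(alg_id, k)] = algorithms[alg_id][1](k, vals)
    return out

algos = {
    "wordcount": (lambda line: [(w, 1) for w in line.split()],
                  lambda k, vs: sum(vs)),
    "maxlen": (lambda line: [("len", len(line))],
               lambda k, vs: max(vs)),
}
result = mrpack_run(["big data big", "map reduce"], algos)
```

Sharing one pass over the input is what trades extra per-record compute for the large I/O savings the abstract reports.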
Whole Sky Imager Characterization of Sky Obscuration by Clouds for the Starfire Optical Range
2010-01-11
Optimized MLAA for quantitative non-TOF PET/MR of the brain
NASA Astrophysics Data System (ADS)
Benoit, Didier; Ladefoged, Claes N.; Rezaei, Ahmadreza; Keller, Sune H.; Andersen, Flemming L.; Højgaard, Liselotte; Hansen, Adam E.; Holm, Søren; Nuyts, Johan
2016-12-01
For quantitative tracer distribution in positron emission tomography, attenuation correction is essential. In a hybrid PET/CT system the CT images serve as a basis for generation of the attenuation map, but in PET/MR, the MR images do not have a similarly simple relationship with the attenuation map. Hence attenuation correction in PET/MR systems is more challenging. Typically, either of two MR sequences is used: the Dixon or the ultra-short echo time (UTE) technique. However, these sequences have some well-known limitations. In this study, a reconstruction technique based on a modified and optimized non-TOF MLAA is proposed for PET/MR brain imaging. The idea is to tune the parameters of the MLTR by applying information from an attenuation image computed from the UTE sequences and a T1w MR image. In this MLTR algorithm, an {αj} parameter is introduced and optimized in order to drive the algorithm to a final attenuation map most consistent with the emission data. Because the non-TOF MLAA is used, a technique to reduce the cross-talk effect is proposed. In this study, the proposed algorithm is compared to the common reconstruction methods, namely OSEM using a CT attenuation map, considered as the reference, and OSEM using the Dixon and UTE attenuation maps. To show the robustness and the reproducibility of the proposed algorithm, a set of 204 [18F]FDG patients, 35 [11C]PiB patients and 1 [18F]FET patient is used. The results show that by choosing an optimized value of {αj} in MLTR, the proposed algorithm improves on the standard MR-based attenuation correction methods (i.e. OSEM using the Dixon or the UTE attenuation maps), and the cross-talk and scale problems are limited.
NASA Astrophysics Data System (ADS)
Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar
2016-06-01
Many techniques have been applied to the water quality problem, but remote sensing techniques have proven successful, especially when artificial neural networks are used as mathematical models alongside them. The Hopfield neural network is a common, fast, simple, and efficient type of artificial neural network, but it struggles when dealing with images that have more than two colours, such as remote sensing images. This work attempts to solve this problem by modifying the network to deal with colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was developed and used with a satellite colour image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based on three essential modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm was established by the high correlation coefficient (R = 0.979) and the low root mean square error (RMSE = 4.301) computed against the validation data, which were divided into two groups: one used for the algorithm and the other used for validating the results. The comparison was with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with the remote sensing technique (THEOS). This is a new and useful application of the HNN, providing a new model for water quality mapping with remote sensing techniques, an important environmental problem.
Satellite Snow-Cover Mapping: A Brief Review
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.
1995-01-01
Satellite snow mapping has been accomplished since 1966, initially using data from the reflective part of the electromagnetic spectrum, and now also employing data from the microwave part of the spectrum. Visible and near-infrared sensors can provide excellent spatial resolution from space, enabling detailed snow mapping. When digital elevation models are also used, snow mapping can provide realistic measurements of snow extent even in mountainous areas. Passive-microwave satellite data permit global snow cover to be mapped on a near-daily basis and estimates of snow depth to be made, but with relatively poor spatial resolution (approximately 25 km). Dense forest cover limits both techniques, and optical remote sensing is limited further by cloud-cover conditions. Satellite remote sensing of snow cover with imaging radars is still in the early stages of research, but shows promise at least for mapping wet or melting snow using C-band (5.3 GHz) synthetic aperture radar (SAR) data. Algorithms are being developed to map global snow and ice cover using Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) data beginning with the launch of the first EOS platform in 1998. Digital maps will be produced that will provide daily, and maximum weekly, global snow, sea-ice and lake-ice cover at 1-km spatial resolution. Statistics will be generated on the extent and persistence of snow or ice cover in each pixel for each weekly map, cloud cover permitting. It will also be possible to generate snow- and ice-cover maps using MODIS data at 250- and 500-m resolution, and to study and map snow and ice characteristics such as albedo. Algorithms to map global snow cover using passive-microwave data have also been under development. Passive-microwave data offer the potential for determining not only snow cover, but snow water equivalent, depth and wetness under all sky conditions. A number of algorithms have been developed to utilize passive-microwave brightness temperatures to provide information on snow cover and water equivalent. The variability of vegetative cover and of snow grain size, globally, limits the utility of a single algorithm to map global snow cover.
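The passive-microwave brightness-temperature algorithms mentioned above are typically spectral-difference retrievals. A minimal sketch of the classic Chang-style formulation follows; the 1.59 cm/K coefficient and the channel frequencies are the commonly cited textbook values, and operational algorithms tune them per sensor and region and mask wet snow and dense forest:

```python
def snow_depth_cm(tb18h, tb37h, coeff=1.59):
    """Brightness-temperature-difference snow retrieval: deeper snow scatters
    more strongly at 37 GHz than at 18 GHz, so the horizontally polarized
    brightness-temperature difference scales roughly linearly with depth.
    Negative differences (no dry snow signal) are clamped to zero."""
    return max(coeff * (tb18h - tb37h), 0.0)

d = snow_depth_cm(245.0, 225.0)   # a 20 K difference
```

It is exactly this single-coefficient linearity that the abstract's closing caveat targets: vegetation cover and grain-size variability mean no one coefficient works globally.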
Oyana, Tonny J.; Achenie, Luke E. K.; Heo, Joon
2012-01-01
The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive, better updating procedure and performance, good robustness, and it runs faster than Kohonen's SOM. PMID:22481977
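The PID-controlled weight update at the heart of this modification can be sketched for a single weight tracking a single input; the gain values are illustrative, not the paper's, and a full SOM would apply this update across all units scaled by the neighborhood function:

```python
def pid_update(w, x, integral, prev_err, kp=0.9, ki=0.02, kd=0.05):
    """PID-controlled weight update: the proportional term is the classic SOM
    correction toward the input; the integral and derivative terms (the
    MIL-SOM addition) accelerate and damp convergence, respectively."""
    err = x - w
    integral += err
    w += kp * err + ki * integral + kd * (err - prev_err)
    return w, integral, err

# A single weight converging to a fixed input x = 1.0.
w, integral, prev = 0.0, 0.0, 0.0
for _ in range(40):
    w, integral, prev = pid_update(w, 1.0, integral, prev)
```

With only the P term this is ordinary exponential approach; the I term removes the residual lag and the D term suppresses overshoot, which is the claimed source of faster convergence.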
NASA Astrophysics Data System (ADS)
Qin, Yuanwei; Xiao, Xiangming; Dong, Jinwei; Zhou, Yuting; Zhu, Zhe; Zhang, Geli; Du, Guoming; Jin, Cui; Kou, Weili; Wang, Jie; Li, Xiangping
2015-07-01
Accurate and timely rice paddy field maps with a fine spatial resolution would greatly improve our understanding of the effects of paddy rice agriculture on greenhouse gases emissions, food and water security, and human health. Rice paddy field maps were developed using optical images with high temporal resolution and coarse spatial resolution (e.g., Moderate Resolution Imaging Spectroradiometer (MODIS)) or low temporal resolution and high spatial resolution (e.g., Landsat TM/ETM+). In the past, the accuracy and efficiency of rice paddy field mapping at fine spatial resolutions were limited by poor data availability and image-based algorithms. In this paper, time series MODIS and Landsat ETM+/OLI images, and a pixel- and phenology-based algorithm are used to map paddy rice planting area. The unique physical features of rice paddy fields during the flooding/open-canopy period are captured with the dynamics of vegetation indices, which are then used to identify rice paddy fields. The algorithm is tested in the Sanjiang Plain (path/row 114/27) in China in 2013. The overall accuracy of the resulting map of paddy rice planting area generated from both Landsat ETM+ and OLI is 97.3%, when evaluated with areas of interest (AOIs) derived from geo-referenced field photos. The paddy rice planting area map also agrees reasonably well with the official statistics at the level of state farms (R2 = 0.94). These results demonstrate that the combination of fine spatial resolution images and the phenology-based algorithm can provide a simple, robust, and automated approach to map the distribution of paddy rice agriculture in a year.
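The flooding signal that drives phenology-based paddy mapping is usually expressed as a vegetation-index inequality. A minimal sketch follows; the LSWI-versus-EVI rule and the 0.05 threshold are the commonly published form of this family of algorithms, used here for illustration rather than as this paper's exact parameters:

```python
def is_flooded(lswi, evi, threshold=0.05):
    """Phenology-based flooding test: during transplanting, standing water
    raises the Land Surface Water Index (LSWI) relative to the Enhanced
    Vegetation Index (EVI), so a pixel observation is flagged as flooded
    when LSWI + threshold >= EVI."""
    return lswi + threshold >= evi

def is_paddy_candidate(series):
    # A pixel is a paddy candidate if any observation within the
    # flooding/open-canopy window passes the test; series holds
    # (lswi, evi) pairs over time for one pixel.
    return any(is_flooded(l, e) for l, e in series)

flooded_pixel = is_paddy_candidate([(0.30, 0.25), (0.20, 0.45)])
upland_pixel = is_paddy_candidate([(0.05, 0.40), (0.10, 0.50)])
```

Because the rule is per pixel and per date, it applies unchanged to MODIS and Landsat time series, which is what makes the approach automated and scalable.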
Dakin, Helen; Abel, Lucy; Burns, Richéal; Yang, Yaling
2018-02-12
The Health Economics Research Centre (HERC) Database of Mapping Studies was established in 2013, based on a systematic review of studies developing mapping algorithms predicting EQ-5D. The Mapping onto Preference-based measures reporting Standards (MAPS) statement was published in 2015 to improve reporting of mapping studies. We aimed to update the systematic review and assess the extent to which recently-published studies mapping condition-specific quality of life or clinical measures to the EQ-5D follow the guidelines published in the MAPS Reporting Statement. A published systematic review was updated using the original inclusion criteria to include studies published by December 2016. We included studies reporting novel algorithms mapping from any clinical measure or patient-reported quality of life measure to either the EQ-5D-3L or EQ-5D-5L. Titles and abstracts of all identified studies and the full text of papers published in 2016 were assessed against the MAPS checklist. The systematic review identified 144 mapping studies reporting 190 algorithms mapping from 110 different source instruments to EQ-5D. Of the 17 studies published in 2016, nine (53%) had titles that followed the MAPS statement guidance, although only two (12%) had abstracts that fully addressed all MAPS items. When the full text of these papers was assessed against the complete MAPS checklist, only two studies (12%) were found to fulfil or partly fulfil all criteria. Of the 141 papers (across all years) that included abstracts, the items on the MAPS statement checklist that were fulfilled by the largest number of studies comprised having a structured abstract (95%) and describing target instruments (91%) and source instruments (88%). The number of published mapping studies continues to increase. Our updated database provides a convenient way to identify mapping studies for use in cost-utility analysis. Most recent studies do not fully address all items on the MAPS checklist.
Special Issue on a Fault Tolerant Network on Chip Architecture
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan
2010-06-01
In this paper, a fast and efficient spare switch selection algorithm is presented for a reliable NoC architecture, called FERNA, based on a specific application mapped onto a mesh topology. Using the ring concept underlying FERNA, the algorithm achieves results equivalent to an exhaustive search with much less run time while improving two parameters: system response time and extra communication cost. The inputs to the FERNA algorithm for minimizing these two parameters are derived from transaction-level simulation using SystemC TLM and from mathematical formulation, respectively. The results demonstrate that improving the above-mentioned parameters increases whole-system reliability, which is calculated analytically. The mapping algorithm has also been investigated as a factor affecting the extra bandwidth requirement and system reliability.
A subharmonic dynamical bifurcation during in vitro epileptiform activity
NASA Astrophysics Data System (ADS)
Perez Velazquez, Jose L.; Khosravani, Houman
2004-06-01
Epileptic seizures are considered to result from a sudden change in the synchronization of firing neurons in brain neural networks. We have used an in vitro model of status epilepticus (SE) to characterize dynamical regimes underlying the observed seizure-like activity. Time intervals between spikes or bursts were used as the variable to construct first-return interpeak or interburst interval plots, for studying neuronal population activity during the transition to seizure, as well as within seizures. Return maps constructed for a brief epoch before seizures were used for approximating the local system dynamics during that time window. Analysis of the first-return maps suggests that intermittency is a dynamical regime underlying the observed epileptic activity. This type of analysis may be useful for understanding the collective dynamics of neuronal populations in the normal and pathological brain.
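Constructing a first-return (Poincaré) map from spike times is straightforward; a minimal sketch follows, with made-up spike times in arbitrary integer units:

```python
def first_return_map(spike_times):
    """Build the first-return map of interspike intervals: each interval I(n)
    is paired with the next interval I(n+1). Plotting these pairs reveals the
    local dynamics; in the study above, the structure of such plots before and
    during seizure-like events suggested an intermittency regime."""
    intervals = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
    return list(zip(intervals, intervals[1:]))

pts = first_return_map([0, 5, 12, 16, 26])
```

Points clustering on the diagonal indicate near-periodic firing, while excursions off the diagonal are the laminar-phase escapes characteristic of intermittency.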
Spectral Mapping at Asteroid 101955 Bennu
NASA Astrophysics Data System (ADS)
Clark, Beth Ellen; Hamilton, Victoria E.; Emery, Joshua P.; Hawley, C. Luke; Howell, Ellen S.; Lauretta, Dante; Simon, Amy A.; Christensen, Philip R.; Reuter, Dennis
2017-10-01
The OSIRIS-REx Asteroid Sample Return mission was launched in September 2016. The main science surveys of asteroid 101955 Bennu start in March 2019. Science instruments include a Visible-InfraRed Spectrometer (OVIRS) and a Thermal Emission Spectrometer (OTES) that will produce observations that will be co-registered to the tessellated shape model of Bennu (the fundamental unit of which is a triangular facet). One task of the science team is to synthesize the results in real time during proximity operations to contribute to selection of the sampling site. Hence, we will be focused on quickly producing spectral maps for: (1) mineral abundances; (2) band strengths of minerals and chemicals (including a search for the subtle ~5% absorption feature produced by organics in meteorites); and (3) temperature and thermal inertia values. In sum, we will be producing on the order of ~60 spectral maps of Bennu’s surface composition and thermophysical properties. Due to overlapping surface spots, simulations of our spectral maps show there may be an opportunity to perform spectral super-resolution. We have a large parameter space of choices available in creating spectral maps of Bennu, including: (a) mean facet size (shape model resolution), (b) percentage of overlap between subsequent spot measurements, (c) the number of spectral spots measured per facet, and (d) the mathematical algorithm used to combine the overlapping spots (or bin them on a per-facet basis). Projection effects -- caused by irregular sampling of an irregularly shaped object with circular spectrometer fields-of-view and then mapping these circles onto triangular facets -- can be intense. To prepare for prox ops, we are simulating multiple mineralogical “truth worlds” of Bennu to study the projection effects that result from our planned methods of spectral mapping. 
This presentation addresses: Can we combine the three planned global surveys of the asteroid (to be obtained at different phase angles) to create a spectral map with higher spatial resolution than the native spectrometer field-of-view in order to increase our confidence in detection of a spatially small occurrence of organics on Bennu?
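One of the spot-combination choices listed above, per-facet binning of overlapping spots, can be sketched as a coverage-weighted average. A minimal 2D sketch under simplifying assumptions (circular spot footprints, a facet counts as "covered" when its centroid falls inside a spot; function and variable names are illustrative, not the mission pipeline):

```python
import numpy as np

def bin_spots_to_facets(spot_centers, spot_spectra, facet_centroids, spot_radius):
    """Average overlapping spectrometer spots onto shape-model facets.

    Each facet receives the mean spectrum of all spots whose circular
    footprint (radius `spot_radius`) covers the facet centroid.
    Returns the per-facet spectra and the per-facet coverage counts.
    """
    n_facets = facet_centroids.shape[0]
    n_bands = spot_spectra.shape[1]
    facet_spectra = np.zeros((n_facets, n_bands))
    counts = np.zeros(n_facets)
    for center, spectrum in zip(spot_centers, spot_spectra):
        d = np.linalg.norm(facet_centroids - center, axis=1)
        hit = d <= spot_radius          # facets inside this spot's footprint
        facet_spectra[hit] += spectrum
        counts[hit] += 1
    covered = counts > 0
    facet_spectra[covered] /= counts[covered, None]
    return facet_spectra, counts
```

Swapping the plain average for a weighted least-squares inversion over the same coverage structure is one route to the spectral super-resolution mentioned above.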
Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Cox, Cary M.
This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research - there are five times as many radar-focused peer-reviewed journal articles as hyperspectral-focused ones. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of the geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. The Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications.
This work also explores the concept of an edge within hyperspectral space, the relative importance of spatial and spectral resolutions as they pertain to HSI edge detection and how effectively compressed HSI data improves edge detection results. The HSI edge detection experiments yielded valuable insights into the algorithms' strengths, weaknesses and optimal alignment to remote sensing applications. The gradient-based edge operator produced strong edge planes across a range of evaluation measures and applications, particularly with respect to false negatives, unbroken edges, urban mapping, vegetation mapping and oil spill mapping applications. False positives and uncompressed HSI data presented occasional challenges to the algorithm. The HySPADE edge operator produced satisfactory results with respect to localization, single-point response, oil spill mapping and trace chemical detection, and was challenged by false positives, declining spectral resolution and vegetation mapping applications. The level set edge detector produced high-quality edge planes for most tests and demonstrated strong performance with respect to false positives, single-point response, oil spill mapping and mineral mapping. False negatives were a regular challenge for the level set edge detection algorithm. Finally, HSI data optimized for spectral information compression and noise was shown to improve edge detection performance across all three algorithms, while the gradient-based algorithm and HySPADE demonstrated significant robustness to declining spectral and spatial resolutions.
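The Di Zenzo multi-band gradient at the core of the first algorithm can be sketched as follows: per-band spatial derivatives are accumulated into a 2 × 2 structure tensor at each pixel, and the square root of its largest eigenvalue gives the edge strength. A minimal illustration (the function name and the use of `np.gradient` are assumptions for illustration, not the dissertation's code):

```python
import numpy as np

def di_zenzo_gradient(cube):
    """Multi-band (Di Zenzo) gradient magnitude for a hyperspectral cube.

    cube: array of shape (rows, cols, bands). Returns, per pixel, the
    square root of the largest eigenvalue of the 2x2 structure tensor
    accumulated over all bands -- a single-band edge-strength image.
    """
    gy, gx = np.gradient(cube, axis=(0, 1))   # per-band spatial derivatives
    gxx = (gx * gx).sum(axis=2)               # structure-tensor entries
    gyy = (gy * gy).sum(axis=2)
    gxy = (gx * gy).sum(axis=2)
    lam = 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
    return np.sqrt(lam)
```

Unlike a band-by-band Sobel followed by averaging, the structure tensor lets opposing gradients in different bands reinforce rather than cancel, which is the point of treating the spectral dimension jointly.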
A Novel Real-Time Reference Key Frame Scan Matching Method
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-01-01
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Most indoor mission environments are typically unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a novel low-cost method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique combining feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point (ICP) algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational time, indicating its potential for real-time systems. PMID:28481285
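Both the feature-to-feature and point-to-point modes ultimately solve the same inner problem: the rigid transform that best aligns matched point pairs. A closed-form SVD (Kabsch) sketch of that shared step, assuming correspondences are already established (not the paper's implementation):

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Closed-form 2D rigid alignment (rotation R, translation t) mapping
    matched source points onto destination points via the SVD/Kabsch method.
    Correspondences between the two scans are assumed already established.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

In a hybrid scheme like RKF, the same solver serves both modes; only the way correspondences are produced (line-feature pairing vs. nearest-neighbour search) changes.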
Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.
Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews
2015-03-01
This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is connected with only a few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.
38 CFR 21.9680 - Certifications and release of payments.
Code of Federal Regulations, 2013 CFR
2013-07-01
... tax return; or (3) The most recent State income tax return; or (4) Rental/lease agreement; or (5) Mortgage document; or (6) Current real property assessment; or (7) Voter registration card. (B) An... internet search engine for mapping upon entering the individual's resident address provided in paragraph (c...
38 CFR 21.9680 - Certifications and release of payments.
Code of Federal Regulations, 2012 CFR
2012-07-01
... tax return; or (3) The most recent State income tax return; or (4) Rental/lease agreement; or (5) Mortgage document; or (6) Current real property assessment; or (7) Voter registration card. (B) An... internet search engine for mapping upon entering the individual's resident address provided in paragraph (c...
38 CFR 21.9680 - Certifications and release of payments.
Code of Federal Regulations, 2014 CFR
2014-07-01
... tax return; or (3) The most recent State income tax return; or (4) Rental/lease agreement; or (5) Mortgage document; or (6) Current real property assessment; or (7) Voter registration card. (B) An... internet search engine for mapping upon entering the individual's resident address provided in paragraph (c...
Farace, Paolo; Righetto, Roberto; Deffet, Sylvain; Meijers, Arturs; Vander Stappen, Francois
2016-12-01
To introduce a fast ray-tracing algorithm in pencil proton radiography (PR) with a multilayer ionization chamber (MLIC) for in vivo range error mapping. Pencil beam PR was obtained by delivering spots uniformly positioned in a square (45 × 45 mm² field-of-view) of 9 × 9 spots capable of crossing the phantoms (210 MeV). The exit beam was collected by an MLIC to sample the integral depth dose (IDD_MLIC). PRs of an electron-density phantom and of a head phantom were acquired by moving the couch to obtain multiple 45 × 45 mm² frames. To map the corresponding range errors, the two-dimensional set of IDD_MLIC was compared with (i) the integral depth dose computed by the treatment planning system (TPS) by both analytic (IDD_TPS) and Monte Carlo (IDD_MC) algorithms in a volume of water simulating the MLIC at the CT, and (ii) the integral depth dose directly computed by a simple ray-tracing algorithm (IDD_direct) through the same CT data. The exact spatial position of the spot pattern was numerically adjusted by testing different in-plane positions and selecting the one that minimized the range differences between IDD_direct and IDD_MLIC. Range error mapping was feasible by both the TPS and the ray-tracing methods, but very sensitive to even small misalignments. In homogeneous regions, the range errors computed by the direct ray-tracing algorithm matched the results obtained by both the analytic and the Monte Carlo algorithms. In both phantoms, lateral heterogeneities were better modeled by the ray-tracing and the Monte Carlo algorithms than by the analytic TPS computation. Accordingly, when the pencil beam crossed lateral heterogeneities, the range errors mapped by the direct algorithm matched the Monte Carlo maps better than those obtained by the analytic algorithm. Finally, the simplicity of the ray-tracing algorithm made it possible to implement a prototype procedure for automated spatial alignment.
The ray-tracing algorithm can reliably replace the TPS method in MLIC PR for in vivo range verification and it can be a key component to develop software tools for spatial alignment and correction of CT calibration.
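The simplicity of a direct ray-trace can be illustrated by accumulating water-equivalent path length through a grid of relative stopping powers; the nearest-neighbour sampling, unit voxel spacing, and function name below are simplifying assumptions, not the authors' algorithm:

```python
import numpy as np

def wepl_along_ray(rsp_grid, start, direction, step, n_steps):
    """Accumulate water-equivalent path length (WEPL) along a straight ray
    through a 2D grid of relative stopping powers (nearest-neighbour
    lookup, unit voxel spacing). A direct ray-trace like this predicts the
    depth-dose shift at the detector without a full TPS dose calculation.
    """
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    pos = np.asarray(start, float)
    wepl = 0.0
    for _ in range(n_steps):
        i, j = int(round(pos[0])), int(round(pos[1]))
        if 0 <= i < rsp_grid.shape[0] and 0 <= j < rsp_grid.shape[1]:
            wepl += rsp_grid[i, j] * step     # RSP x path length in this voxel
        pos += direction * step
    return wepl
```

Comparing the accumulated WEPL per spot against the measured depth-dose range then yields a per-spot range-error value, which is the quantity mapped above.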
Do Doppler color flow algorithms for mapping disturbed flow make sense?
Gardin, J M; Lobodzinski, S M
1990-01-01
It has been suggested that a major advantage of Doppler color flow mapping is its ability to visualize areas of disturbed ("turbulent") flow, for example, in valvular stenosis or regurgitation and in shunts. To investigate how various color flow mapping instruments display disturbed flow information, color image processing was used to evaluate the most common velocity-variance color encoding algorithms of seven commercially available ultrasound machines. In six of seven machines, green was reportedly added by the variance display algorithms to map areas of disturbed flow. The amount of green intensity added to each pixel along the red and blue portions of the velocity reference color bar was calculated for each machine. In this study, velocities displayed on the reference color bar ranged from +/- 46 to +/- 64 cm/sec, depending on the Nyquist limit. Of note, changing the Nyquist limits depicted on the color reference bars did not change the distribution of the intensities of red, blue, or green within the contour of the reference map, but merely assigned different velocities to the pixels. Most color flow mapping algorithms in our study added increasing intensities of green to increasing positive (red) or negative (blue) velocities along their color reference bars. Most of these machines also added increasing green to red and blue color intensities horizontally across their reference bars as a marker of increased variance (spectral broadening). However, at any given velocity, marked variations were noted between different color flow mapping instruments in the amount of green added to their color velocity reference bars.(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Ray, S. E.; Pagano, T. S.; Fetzer, E. J.; Lambrigtsen, B.; Olsen, E. T.; Teixeira, J.; Licata, S. J.; Hall, J. R.; Thompson, C. K.
2015-12-01
The Atmospheric Infrared Sounder (AIRS) on NASA's Aqua spacecraft has been returning daily global observations of Earth's atmospheric constituents and properties since 2002. With a 12-year data record and daily, global observations in near real-time, AIRS data can play a role in applications that fall under many of the NASA Applied Sciences focus areas. For vector-borne disease, research is underway using AIRS near surface retrievals to assess outbreak risk, mosquito incubation periods and epidemic potential for dengue fever, malaria, and West Nile virus. For drought applications, AIRS temperature and humidity data are being used in the development of new drought indicators and improvement in the understanding of drought development. For volcanic hazards, new algorithms using AIRS data are in development to improve the reporting of sulfur dioxide concentration, the burden and height of volcanic ash and dust, all of which pose a safety threat to aircraft. In addition, anomaly maps of many of AIRS standard products are being produced to help highlight "hot spots" and illustrate trends. To distribute its applications imagery, AIRS is leveraging existing NASA data frameworks and organizations to facilitate archiving, distribution and participation in the BEDI. This poster will communicate the status of the applications effort for the AIRS Project and provide examples of new maps designed to best communicate the AIRS data.
Chaotic map clustering algorithm for EEG analysis
NASA Astrophysics Data System (ADS)
Bellotti, R.; De Carlo, F.; Stramaglia, S.
2004-03-01
The non-parametric chaotic map clustering algorithm has been applied to the analysis of electroencephalographic signals in order to recognize Huntington's disease, one of the most dangerous pathologies of the central nervous system. The performance of the method has been compared with those obtained through parametric algorithms, such as K-means and deterministic annealing, and a supervised multi-layer perceptron. While supervised neural networks need a training phase, performed by means of data tagged by the genetic test, and the parametric methods require a prior choice of the number of classes to find, the chaotic map clustering gives natural evidence of the pathological class, without any training or supervision, thus providing a new efficient methodology for the recognition of patterns affected by Huntington's disease.
Beyond the usual mapping functions in GPS, VLBI and Deep Space tracking.
NASA Astrophysics Data System (ADS)
Barriot, Jean-Pierre; Serafini, Jonathan; Sichoix, Lydie
2014-05-01
We describe here a new algorithm to model the water contents of the atmosphere (including ZWD) from GPS slant wet delays relative to a single receiver. We first make the assumption that the water vapor contents are mainly governed by a scale height (exponential law), and secondly that the departures from this decaying exponential can be mapped as a set of low-degree 3D Zernike functions (w.r.t. space) and Tchebyshev polynomials (w.r.t. time). We compare this new algorithm with previous algorithms known as mapping functions in GPS, VLBI and Deep Space tracking, and give an example with data acquired over a one-day time span at the Geodesy Observatory of Tahiti.
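The departure term described above, low-degree Zernike functions in space times Tchebyshev polynomials in time, can be sketched as a separable expansion. A hand-coded illustration with only four Zernike terms (the basis ordering, normalization, and coefficient layout are assumptions for illustration, not the paper's parameterization):

```python
import numpy as np

def zernike_basis(r, theta):
    """Four low-order Zernike terms on the unit disc: piston, two tilts, defocus."""
    return np.array([
        np.ones_like(r),              # Z0: piston
        r * np.cos(theta),            # Z1: tilt
        r * np.sin(theta),            # Z2: tilt
        2.0 * r ** 2 - 1.0,           # Z4: defocus
    ])

def chebyshev_basis(t, k_max):
    """T_0..T_kmax on t in [-1, 1] via T_k = 2 t T_{k-1} - T_{k-2}."""
    T = [np.ones_like(t), t]
    for _ in range(2, k_max + 1):
        T.append(2.0 * t * T[-1] - T[-2])
    return np.array(T[: k_max + 1])

def wet_departure(coeffs, r, theta, t):
    """Departure of the wet-delay field from the exponential scale-height
    model, expanded in (spatial) Zernike x (temporal) Chebyshev terms.
    coeffs has shape (n_zernike, n_chebyshev)."""
    Z = zernike_basis(r, theta)
    T = chebyshev_basis(t, coeffs.shape[1] - 1)
    return Z @ coeffs @ T
```

Fitting the coefficient matrix to slant-delay residuals by least squares would complete the model; the exponential part alone reduces to the familiar 1/sin(elevation) mapping.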
Surface registration technique for close-range mapping applications
NASA Astrophysics Data System (ADS)
Habib, Ayman F.; Cheng, Rita W. T.
2006-08-01
Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of mapped objects. Several registration techniques are available that work directly with the raw laser points or with extracted features from the point cloud. Some examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm that is based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans. The algorithm can simultaneously establish correspondence between two surfaces and estimate the transformation parameters relating them. An experiment with two partially overlapping laser scans of a small object was performed with the proposed algorithm and shows successful registration. A high quality of fit between the two scans is achieved and improvement is found when compared to the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications to help with the generation of complete 3D models.
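For context, the classical point-to-point ICP baseline mentioned above likewise avoids assuming prior correspondence, establishing it by nearest-neighbour search at every iteration. A minimal 2D sketch (brute-force neighbour search, no convergence test; not the MIHT+ICP method of the paper):

```python
import numpy as np

def icp(src, dst, n_iters=20):
    """Minimal point-to-point ICP: pair every source point with its nearest
    destination point, solve the best rigid transform in closed form (SVD),
    apply it, and repeat. Returns the aligned copy of src.
    """
    cur = src.copy()
    for _ in range(n_iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]          # nearest neighbours
        cc, cm = cur.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - cc).T @ (matched - cm))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        cur = (cur - cc) @ R.T + cm
    return cur
```

The well-known weakness this exposes, sensitivity to the initial pose and to wrong nearest-neighbour associations, is what the MIHT stage is designed to mitigate by searching the transformation parameter space directly.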
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. 
Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Risk-Informed Mean Recurrence Intervals for Updated Wind Maps in ASCE 7-16.
McAllister, Therese P; Wang, Naiyu; Ellingwood, Bruce R
2018-05-01
ASCE 7 is moving toward adopting load requirements that are consistent with risk-informed design goals characteristic of performance-based engineering (PBE). ASCE 7-10 provided wind maps that correspond to return periods of 300, 700, and 1,700 years for Risk Categories I, II, and combined III/IV, respectively. The risk targets for Risk Categories III and IV buildings and other structures (designated as essential facilities) are different in PBE. The reliability analyses reported in this paper were conducted using updated wind load data to (1) confirm that the return periods already in ASCE 7-10 were also appropriate for risk-informed PBE, and (2) to determine a new risk-based return period for Risk Category IV. The use of data for the wind directionality factor, K_d, which has become available from recent wind tunnel tests, revealed that reliabilities associated with wind load combinations for Risk Category II structures are, in fact, consistent with the reliabilities associated with the ASCE 7 gravity load combinations. This paper shows that the new wind maps in ASCE 7-16, which are based on return periods of 300, 700, 1,700, and 3,000 years for Risk Categories I, II, III, and IV, respectively, achieve the reliability targets in Section 1.3.1.3 of ASCE 7-16 for nonhurricane wind loads.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
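The alternating-projection structure can be illustrated on a toy problem: alternately projecting onto a nonnegativity constraint and a fixed-activity hyperplane. This is only a stand-in for the shape of PAPA's iterations (the real algorithm alternates proximity steps for the TV-norm with an EM-preconditioner), not the authors' method:

```python
import numpy as np

def project_nonneg(x):
    """Euclidean projection onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

def project_sum(x, total):
    """Euclidean projection onto the hyperplane {x : sum(x) = total}."""
    return x + (total - x.sum()) / x.size

def alternating_projections(x0, total, n_iters=200):
    """Toy von Neumann alternating projections between two convex sets:
    the nonnegative orthant and a fixed-sum hyperplane. Converges to a
    point in their intersection when it is nonempty.
    """
    x = x0.astype(float)
    for _ in range(n_iters):
        x = project_nonneg(project_sum(x, total))
    return x
```

In ECT reconstruction the analogues are far richer (the "projection" for the TV term is itself a proximity operator), but the fixed-point logic, apply each operator in turn until the iterate stops moving, is the same.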
Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.
NASA Astrophysics Data System (ADS)
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery are required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI.
Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
Highly efficient computer algorithm for identifying layer thickness of atomically thin 2D materials
NASA Astrophysics Data System (ADS)
Lee, Jekwan; Cho, Seungwan; Park, Soohyun; Bae, Hyemin; Noh, Minji; Kim, Beom; In, Chihun; Yang, Seunghoon; Lee, Sooun; Seo, Seung Young; Kim, Jehyun; Lee, Chul-Ho; Shim, Woo-Young; Jo, Moon-Ho; Kim, Dohun; Choi, Hyunyong
2018-03-01
The fields of layered material research, such as transition-metal dichalcogenides (TMDs), have demonstrated that optical, electrical and mechanical properties depend strongly on the layer number N. Thus, efficient and accurate determination of N is the most crucial step before the associated device fabrication. The most widely used experimental technique for identifying N relies on an optical microscope. However, a critical drawback of this approach is that it depends on extensive laboratory experience to estimate N; it requires a very time-consuming image-searching task performed by human eyes, plus secondary measurements such as atomic force microscopy and Raman spectroscopy to confirm N. In this work, we introduce a computer algorithm based on image analysis of a quantized optical contrast. We show that our algorithm applies to a wide variety of layered materials, including graphene, MoS2, and WS2, regardless of substrate. The algorithm largely consists of two parts. First, it sets up an appropriate boundary between the target flakes and the substrate. Second, to compute N, it automatically calculates the optical contrast using an adaptive RGB estimation process for each target, which yields a matrix of different integer Ns and returns a map of N over the target flake position. Using conventional desktop computational power, the time taken to display the final N matrix was 1.8 s on average for a 1280 × 960 pixel image, with a high accuracy of 90% (six estimation errors among 62 samples) when compared to the other methods. To show the effectiveness of our algorithm, we also apply it to TMD flakes transferred onto optically transparent c-axis sapphire substrates and obtain a similarly high accuracy of 94% (two estimation errors among 34 samples).
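The core of the contrast-to-N mapping can be sketched as follows; the per-layer contrast step is a hypothetical calibration constant, and the adaptive RGB estimation of the actual algorithm is reduced here to a plain channel average:

```python
import numpy as np

def estimate_layers(flake_rgb, substrate_rgb, contrast_per_layer):
    """Estimate layer number N from quantized optical contrast.

    Contrast is (substrate - flake) / substrate per channel, averaged over
    RGB; dividing by the per-layer contrast step and rounding gives an
    integer N. `contrast_per_layer` is an assumed material/substrate
    calibration constant, not a value from the paper.
    """
    flake = np.asarray(flake_rgb, float)
    sub = np.asarray(substrate_rgb, float)
    contrast = np.mean((sub - flake) / sub)
    return int(round(contrast / contrast_per_layer))
```

Applying this per pixel over the segmented flake region would produce the integer-N matrix the abstract describes, with the boundary-finding step supplying the flake/substrate mask.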
Improving clustering with metabolic pathway data.
Milone, Diego H; Stegmayer, Georgina; López, Mariana; Kamenetzky, Laura; Carrari, Fernando
2014-04-10
It is a common practice in bioinformatics to validate each group returned by a clustering algorithm through manual analysis, according to a-priori biological knowledge. This procedure helps find functionally related patterns to propose hypotheses for their behavior and the biological processes involved. Therefore, this knowledge is used only as a second step, after data are clustered according to their expression patterns. Thus, it could be very useful to improve the clustering of biological data by incorporating prior knowledge into the cluster formation itself, in order to enhance the biological value of the clusters. A novel training algorithm for clustering is presented, which evaluates the biological internal connections of the data points while the clusters are being formed. Within this training algorithm, the calculation of distances among data points and neuron centroids includes a new term based on information from well-known metabolic pathways. Standard self-organizing map (SOM) training and the biologically inspired SOM (bSOM) training were tested with two real data sets of transcripts and metabolites from Solanum lycopersicum and Arabidopsis thaliana species. Classical data mining validation measures were used to evaluate the clustering solutions obtained by both algorithms. Moreover, a new measure that takes into account the biological connectivity of the clusters was applied. The results of bSOM show important improvements in convergence and performance for the proposed clustering method in comparison to standard SOM training, in particular from the application point of view. Analyses of the clusters obtained with bSOM indicate that including biological information during training can certainly increase the biological value of the clusters found with the proposed method.
It is worth highlighting that this effectively improved the results and can simplify their further analysis. The algorithm is available as a web-demo at http://fich.unl.edu.ar/sinc/web-demo/bsom-lite/. The source code and the data sets supporting the results of this article are available at http://sourceforge.net/projects/sourcesinc/files/bsom.
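The modified distance term can be sketched as a blend of an expression-space distance and a biological penalty. In the sketch below, `pathway_sim`, the 50/50 weighting, and the linear blend are illustrative assumptions, not the paper's exact formulation:

```python
import math

def bsom_distance(x, centroid, pathway_sim, lam=0.5):
    # Blend expression-space distance to a neuron centroid with a
    # biological penalty: pathway_sim in [0, 1] is high when the data
    # point shares metabolic pathways with the cluster's members.
    d_expr = math.dist(x, centroid)
    d_bio = 1.0 - pathway_sim
    return (1.0 - lam) * d_expr + lam * d_bio

# A point close in expression space is pulled toward a cluster more
# strongly when it is also biologically connected to that cluster.
x, c = (1.0, 2.0), (1.1, 2.1)
print(bsom_distance(x, c, pathway_sim=1.0) < bsom_distance(x, c, pathway_sim=0.0))  # True
```

With `lam=0` the distance reduces to the plain Euclidean term of standard SOM training.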
Machine Learning Applied to Dawn/VIR data of Vesta in view of MERTIS/BepiColombo.
NASA Astrophysics Data System (ADS)
Helbert, J.; D'Amore, M.; Le Scaon, R.; Maturilli, A.; Palomba, E.; Longobardo, A.; Hiesinger, H.
2016-12-01
Remote sensing spectroscopy is one of the most commonly used techniques in planetary science, and for recent instruments producing huge amounts of data, classical methods can fail to unlock the full scientific potential buried in the measurements. We explored several machine learning techniques: a multi-step clustering method was developed, using an image segmentation method, a stream algorithm, and hierarchical clustering. The MErcury Radiometer and Thermal infrared Imaging Spectrometer (MERTIS) is part of the payload of the Mercury Planetary Orbiter spacecraft of the ESA-JAXA BepiColombo mission. MERTIS's scientific goals are to infer rock-forming minerals, to map surface composition, and to study surface temperature variations on Mercury. The NASA mission Dawn carries a suite of instruments aimed at understanding the two most massive objects in the main asteroid belt: Vesta and Ceres. Dawn successfully completed the exploration of Vesta in September 2012 and is now in the last phase of the mission around Ceres. To cope with the stream of data that will be delivered by MERTIS, we developed an algorithm that can aggregate new data as they arrive during the mission, giving the scientist a guide to the most interesting and novel discoveries on Mercury. The Dawn/Vesta VIR data serve as a testbed for the algorithm. The algorithm identified the olivine outcrops around two craters on Vesta's surface described in Ammannito et al., 2013. We furthermore mimicked the data acquisition process as if the mission were downlinking the data live. The algorithm provides insightful information on the novelty and classes in the data as they are collected. This will enhance MERTIS targeting and maximize its scientific return during the BepiColombo mission at Mercury. E. Ammannito et al. "Olivine in an unexpected location on Vesta's surface". In: Nature 504.7478 (2013), pp. 122-125.
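The stream step of such a pipeline can be sketched as a leader-style clusterer that aggregates spectra as they arrive, spawning a new class when an incoming spectrum is far from every existing class. This is a generic illustration of the idea only; the distance threshold, running-mean update, and the absence of the segmentation and hierarchical stages are all simplifications:

```python
import math

def stream_cluster(spectra, threshold):
    # Leader-style streaming clusterer: each incoming spectrum joins the
    # nearest existing class if within `threshold`, else starts a new class.
    centroids, labels, counts = [], [], []
    for s in spectra:
        if centroids:
            d, k = min((math.dist(s, c), k) for k, c in enumerate(centroids))
        else:
            d, k = float("inf"), -1
        if d <= threshold:
            counts[k] += 1
            n = counts[k]
            # incremental running-mean update of the matched centroid
            centroids[k] = [c + (v - c) / n for c, v in zip(centroids[k], s)]
            labels.append(k)
        else:
            centroids.append(list(s))
            counts.append(1)
            labels.append(len(centroids) - 1)
    return labels, centroids

labels, cents = stream_cluster([(0, 0), (0.1, 0), (5, 5), (5.1, 5)], threshold=1.0)
print(labels)  # [0, 0, 1, 1]
```

A novel spectral class shows up simply as the creation of a new centroid, which is the "novelty" signal described above.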
Public-key encryption with chaos.
Kocarev, Ljupco; Sterjev, Marjan; Fekete, Attila; Vattay, Gabor
2004-12-01
We propose public-key encryption algorithms based on chaotic maps, which are generalizations of well-known and commercially used algorithms: Rivest-Shamir-Adleman (RSA), ElGamal, and Rabin. For the generalized RSA algorithm we discuss its software implementation and properties in detail. We show that our algorithm is as secure as the RSA algorithm.
Public-key encryption with chaos
NASA Astrophysics Data System (ADS)
Kocarev, Ljupco; Sterjev, Marjan; Fekete, Attila; Vattay, Gabor
2004-12-01
We propose public-key encryption algorithms based on chaotic maps, which are generalizations of well-known and commercially used algorithms: Rivest-Shamir-Adleman (RSA), ElGamal, and Rabin. For the generalized RSA algorithm we discuss its software implementation and properties in detail. We show that our algorithm is as secure as the RSA algorithm.
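Chaotic public-key generalizations of this kind are commonly built on Chebyshev maps, whose semigroup (commutativity) property plays the role that modular exponentiation plays in classical RSA/ElGamal. The snippet below only illustrates that property over the reals; actual schemes work over finite sets with additional machinery not shown here:

```python
import math

def chebyshev(n, x):
    # T_n(x) = cos(n * arccos(x)) for |x| <= 1: the n-fold iterate of the
    # chaotic Chebyshev map.
    return math.cos(n * math.acos(x))

# Semigroup property: T_p(T_q(x)) = T_q(T_p(x)) = T_{p*q}(x), so two
# parties can apply secret indices p and q in either order.
x = 0.3
print(abs(chebyshev(7, chebyshev(11, x)) - chebyshev(77, x)) < 1e-9)  # True
```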
McMahon, Christopher J.; Toomey, Joshua P.
2017-01-01
Background We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space external cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity, including systematically varying complexity in some regions. Methods In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Procaccia algorithm. The new CD procedure is called the ‘minimum gradient detection algorithm’. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Findings Applying the new ‘minimum gradient detection algorithm’ CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise.
These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. Conclusions More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems, and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about complex dynamics. The CD/minimum gradient measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, as well as conventional physical measurements. PMID:28837602
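The quantity at the heart of the Grassberger-Procaccia algorithm is the correlation sum C(r), whose log-log slope versus r estimates the correlation dimension. The bare-bones sketch below computes C(r) for a delay-embedded series; the paper's 'minimum gradient detection' adds an automated slope-selection stage that is not reproduced here:

```python
import math

def correlation_sum(series, r, dim=3, tau=1):
    # Delay embedding: each point is (x_i, x_{i+tau}, ..., x_{i+(dim-1)tau}).
    pts = [series[i:i + dim * tau:tau] for i in range(len(series) - (dim - 1) * tau)]
    n = len(pts)
    # Fraction of point pairs closer than r.
    count = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if math.dist(pts[i], pts[j]) < r
    )
    return 2.0 * count / (n * (n - 1))

series = [math.sin(0.3 * i) for i in range(100)]
print(correlation_sum(series, 0.2) <= correlation_sum(series, 2.0))  # True
```

For a low-dimensional attractor, log C(r) versus log r is approximately linear over a range of scales, and that slope is the CD estimate.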
Retinal vessel segmentation on SLO image
Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Schuman, Joel S.
2010-01-01
A scanning laser ophthalmoscopy (SLO) image, taken from optical coherence tomography (OCT), usually has lower global/local contrast and more noise than a traditional retinal photograph, which makes vessel segmentation a challenging task. A hybrid algorithm is proposed to efficiently solve these problems by fusing several designed methods, taking advantage of each method and reducing measurement error. The algorithm consists of several steps: image preprocessing, thresholding probe and weighted fusing. Four different methods are first designed to transform the SLO image into feature response images using different combinations of matched filtering, contrast enhancement and mathematical morphology operators. A thresholding probe algorithm is then applied to those response images to obtain four vessel maps. Weighted majority opinion is used to fuse these vessel maps and generate a final vessel map. The experimental results showed that the proposed hybrid algorithm could successfully segment the blood vessels in SLO images, detecting the major and small vessels while suppressing noise. The algorithm showed substantial potential for various clinical applications. The method can also be extended to medical image registration based on blood vessel location. PMID:19163149
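The weighted-fusion step can be sketched as a pixel-wise weighted majority vote over the binary vessel maps produced by the four methods. The weights and threshold below are illustrative, not the paper's tuned values:

```python
def fuse_vessel_maps(maps, weights, threshold=0.5):
    # Pixel-wise weighted majority vote over binary vessel maps:
    # a pixel is vessel when the weighted fraction of methods voting
    # 'vessel' exceeds `threshold`.
    total = sum(weights)
    fused = []
    for rows in zip(*maps):           # same row from every method's map
        fused.append([
            1 if sum(w * p for w, p in zip(weights, pixels)) / total > threshold else 0
            for pixels in zip(*rows)  # same pixel from every method's row
        ])
    return fused

m1, m2, m3 = [[1, 0]], [[1, 1]], [[0, 0]]
print(fuse_vessel_maps([m1, m2, m3], [1, 1, 1]))  # [[1, 0]]
```

Isolated false positives from a single method are outvoted, which is the noise-suppression mechanism of the fusion.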
Autonomous Wheeled Robot Platform Testbed for Navigation and Mapping Using Low-Cost Sensors
NASA Astrophysics Data System (ADS)
Calero, D.; Fernandez, E.; Parés, M. E.
2017-11-01
This paper presents the concept of an architecture for a wheeled robot system that helps researchers in the field of geomatics speed up their daily research in kinematic geodesy, indoor navigation and indoor positioning. The presented ideas correspond to an extensible and modular hardware and software system aimed at the development of new low-cost mapping algorithms as well as at the evaluation of sensor performance. The concept, already implemented in CTTC's system ARAS (Autonomous Rover for Automatic Surveying), is generic and extensible. This means that it is possible to incorporate new navigation algorithms or sensors at no maintenance cost; only the effort related to the development tasks required to create such algorithms needs to be taken into account. As a consequence, change poses a much smaller problem for research activities in this specific area. The system includes several standalone sensors that may be combined in different ways to accomplish several goals; that is, it may be used to perform a variety of tasks, for instance evaluating the performance of positioning or mapping algorithms.
Combined distributed and concentrated transducer network for failure indication
NASA Astrophysics Data System (ADS)
Ostachowicz, Wieslaw; Wandowski, Tomasz; Malinowski, Pawel
2010-03-01
In this paper an algorithm for localising discontinuities in thin panels made of aluminium alloy is presented. The algorithm uses Lamb wave propagation methods for discontinuity localisation. Elastic waves were generated and received using piezoelectric transducers, arranged in concentrated arrays distributed on the specimen surface. In this way almost the whole specimen could be monitored using this combined distributed-concentrated transducer network. The excited elastic waves propagate and reflect from panel boundaries and from discontinuities existing in the panel. Wave reflections were registered by the piezoelectric transducers and used in a signal processing algorithm. The proposed processing algorithm consists of two parts: signal filtering and extraction of obstacle locations. The first part enhances the signals by removing noise; the second part extracts features connected with wave reflections from discontinuities. The extracted features were the basis for creating damage influence maps, which indicate the intensity of elastic wave reflections and thus the obstacle coordinates. The described signal processing algorithms were implemented in the MATLAB environment. It should be underlined that the results presented in this work are based solely on experimental signals.
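Damage influence maps of this kind are often built by delay-and-sum imaging: for each grid point, every actuator-sensor pair contributes its signal amplitude at the time of flight actuator → point → sensor, so reflectors accumulate high values. The sketch below is a generic illustration under that assumption (the plain amplitude sum and the omission of the filtering stage are simplifications, not the authors' exact processing chain):

```python
import math

def damage_influence_map(signals, pairs, grid, velocity, dt):
    # For each grid point, sum every actuator-sensor pair's signal
    # amplitude at the expected time of flight actuator -> point -> sensor.
    dmap = []
    for p in grid:
        acc = 0.0
        for (actuator, sensor), sig in zip(pairs, signals):
            tof = (math.dist(actuator, p) + math.dist(p, sensor)) / velocity
            idx = int(round(tof / dt))
            if 0 <= idx < len(sig):
                acc += abs(sig[idx])
        dmap.append(acc)
    return dmap

# Toy case: one pair, one echo arriving at the time matching point (0.5, 0).
sig = [0.0] * 20
sig[10] = 1.0
print(damage_influence_map([sig], [((0.0, 0.0), (1.0, 0.0))],
                           [(0.5, 0.0), (5.0, 5.0)], velocity=1.0, dt=0.1))  # [1.0, 0.0]
```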
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, that represent the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, were also developed using the proposed algorithm as the computation kernel.
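Tag SNP selection is often framed as a set-cover problem: pick the fewest SNPs whose linkage sets jointly represent all SNPs. The greedy sketch below is a generic illustration of that framing, not the specific algorithm of the paper:

```python
def greedy_tag_snps(link, all_snps):
    # Greedy set cover: repeatedly pick the SNP whose linked set covers
    # the most still-uncovered SNPs. `link` maps each candidate SNP to
    # the set of SNPs it represents (e.g. via high linkage disequilibrium).
    uncovered = set(all_snps)
    tags = []
    while uncovered:
        best = max(link, key=lambda s: len(link[s] & uncovered))
        gained = link[best] & uncovered
        if not gained:
            break  # remaining SNPs are not representable by any candidate
        tags.append(best)
        uncovered -= gained
    return tags

link = {"s1": {"s1", "s2", "s3"}, "s2": {"s2"}, "s4": {"s4"}}
print(greedy_tag_snps(link, {"s1", "s2", "s3", "s4"}))  # ['s1', 's4']
```

The speedups reported above come from doing this kind of selection with more efficient data structures, not from changing the underlying covering objective.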
Multivariate statistical model for 3D image segmentation with application to medical images.
John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O
2003-12-01
In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
MODIS Snow Cover Mapping Decision Tree Technique: Snow and Cloud Discrimination
NASA Technical Reports Server (NTRS)
Riggs, George A.; Hall, Dorothy K.
2010-01-01
Accurate mapping of snow cover continues to challenge cryospheric scientists and modelers. The Moderate-Resolution Imaging Spectroradiometer (MODIS) snow data products have been used since 2000 by many investigators to map and monitor snow cover extent for various applications. Users have reported on the utility of the products and also on problems encountered. Three problems or hindrances in the use of the MODIS snow data products that have been reported in the literature are cloud obscuration, snow/cloud confusion, and snow omission errors in thin or sparse snow cover conditions. Implementation of the MODIS snow algorithm in a decision tree technique using surface reflectance input to mitigate those problems is being investigated. The objective of this work is to use a decision tree structure for the snow algorithm. This should alleviate snow/cloud confusion and omission errors and provide a snow map with classes that convey information on how snow was detected, e.g. snow under clear sky, snow under cloud, to give users flexibility in interpreting and deriving a snow map. Results of a snow cover decision tree algorithm are compared to the standard MODIS snow map and found to exhibit improved ability to alleviate snow/cloud confusion in some situations, allowing up to about a 5% increase in mapped snow cover extent, and thus accuracy, in some scenes.
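A decision-tree classifier of this kind reduces to a cascade of threshold tests whose leaves are labeled classes. The toy sketch below conveys the structure only; the thresholds, inputs, and class names are illustrative assumptions, not the MODIS algorithm's actual tests:

```python
def classify_pixel(ndsi, cloud_conf, green_refl):
    # Toy decision tree: test cloud confidence first, then snow indices,
    # and return a class that records how snow was detected.
    if cloud_conf > 0.8:
        return "cloud"
    if ndsi > 0.4 and green_refl > 0.1:
        return "snow under clear sky" if cloud_conf < 0.2 else "snow under cloud"
    return "no snow"

print(classify_pixel(ndsi=0.6, cloud_conf=0.1, green_refl=0.3))  # snow under clear sky
```

Because each leaf records the path taken, the output map carries the "how snow was detected" provenance described above.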
Single Point vs. Mapping Approach for Spectral Cytopathology (SCP)
Schubert, Jennifer M.; Mazur, Antonella I.; Bird, Benjamin; Miljković, Miloš; Diem, Max
2011-01-01
In this paper we describe the advantages of collecting infrared microspectral data in imaging mode as opposed to point mode. Imaging data are processed using the PapMap algorithm, which co-adds pixel spectra that have been scrutinized for R-Mie scattering effects as well as other constraints. The signal-to-noise quality of PapMap spectra will be compared to point spectra for oral mucosa cells deposited onto low-e slides. The effects of software atmospheric correction will also be discussed. Combined with the PapMap algorithm, data collection in imaging mode proves to be a superior method for spectral cytopathology. PMID:20449833
Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua
2018-01-24
Indoor occupants' positions are significant for smart home service systems, which usually consist of robot services, appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans' positions in indoor environments based on passive infrared (PIR) sensors, using an accessibility map and an A-star algorithm, aiming at providing intelligent services. First, the accessibility map, reflecting the visiting habits of the occupants, is established through training with the indoor environment and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide rough location information. For more precise positioning, the A-star algorithm is used to refine the localization, fused with the accessibility map and the PIR sensor data. Experiments were conducted in a mock apartment testbed. Ground truth data was obtained from an OptiTrack system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provides a solution for home robot localization.
Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua
2018-01-01
Indoor occupants' positions are significant for smart home service systems, which usually consist of robot services, appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans' positions in indoor environments based on passive infrared (PIR) sensors, using an accessibility map and an A-star algorithm, aiming at providing intelligent services. First, the accessibility map, reflecting the visiting habits of the occupants, is established through training with the indoor environment and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide rough location information. For more precise positioning, the A-star algorithm is used to refine the localization, fused with the accessibility map and the PIR sensor data. Experiments were conducted in a mock apartment testbed. Ground truth data was obtained from an OptiTrack system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provides a solution for home robot localization. PMID:29364188
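A-star over a cost grid is a natural way to combine an accessibility map with rough sensor fixes: cell costs encode how likely the occupant is to pass through a cell, and the search refines a path between PIR-derived anchor points. The sketch below is a generic 4-connected grid A-star with a Manhattan heuristic, not the paper's exact formulation:

```python
import heapq

def a_star(grid_cost, start, goal):
    # grid_cost[r][c] is the cost of entering a cell (an accessibility-map
    # style weight); impassable cells use float('inf').
    rows, cols = len(grid_cost), len(grid_cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                ng = g + grid_cost[nr][nc]
                heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

inf = float("inf")
grid = [[1, 1, 1], [1, inf, 1], [1, 1, 1]]
print(a_star(grid, (0, 0), (2, 2)))  # a 5-cell path that avoids (1, 1)
```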
Accurate estimation of short read mapping quality for next-generation genome sequencing
Ruffalo, Matthew; Koyutürk, Mehmet; Ray, Soumya; LaFramboise, Thomas
2012-01-01
Motivation: Several software tools specialize in the alignment of short next-generation sequencing reads to a reference sequence. Some of these tools report a mapping quality score for each alignment—in principle, this quality score tells researchers the likelihood that the alignment is correct. However, the reported mapping quality often correlates weakly with actual accuracy and the qualities of many mappings are underestimated, encouraging researchers to discard correct mappings. Further, these low-quality mappings tend to correlate with variations in the genome (both single nucleotide and structural), and such mappings are important in accurately identifying genomic variants. Approach: We develop a machine learning tool, LoQuM (LOgistic regression tool for calibrating the Quality of short read mappings), to assign reliable mapping quality scores to mappings of Illumina reads returned by any alignment tool. LoQuM uses statistics on the read (base quality scores reported by the sequencer) and the alignment (number of matches, mismatches and deletions, mapping quality score returned by the alignment tool, if available, and number of mappings) as features for classification, and uses simulated reads to learn a logistic regression model that relates these features to actual mapping quality. Results: We test the predictions of LoQuM on an independent dataset generated by the ART short read simulation software and observe that LoQuM can ‘resurrect’ many mappings that are assigned zero quality scores by the alignment tools and are therefore likely to be discarded by researchers. We also observe that the recalibration of mapping quality scores greatly enhances the precision of called single nucleotide polymorphisms. Availability: LoQuM is available as open source at http://compbio.case.edu/loqum/. Contact: matthew.ruffalo@case.edu. PMID:22962451
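The core of such a recalibration is an ordinary logistic regression from alignment features to the probability that a mapping is correct. The tiny trainer below illustrates the idea on a synthetic one-feature example (mismatch count); the features, data, and plain SGD optimizer are illustrative, not LoQuM's actual training setup:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    # Stochastic gradient ascent on the log-likelihood of a logistic model.
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, x):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# Synthetic training data: more mismatches -> mapping less likely correct.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [1, 1, 0, 0]
w = train_logistic(X, y)
print(predict(w, [0.0]) > 0.5 > predict(w, [3.0]))  # True
```

In the real tool the labels come from simulated reads, for which the true origin of each read (and hence mapping correctness) is known.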
Metadata mapping and reuse in caBIG.
Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis
2009-02-05
This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG framework or other frameworks that use metadata repositories. The Dice (di-gram) and Dynamic algorithms are compared, and both have similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG framework and potentially within any framework that uses a metadata repository. This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG. This effort contributes to facilitating the development of interoperable systems within caBIG as well as other metadata frameworks. Such efforts are critical to addressing the need for systems that can handle the enormous amounts of diverse data that can be leveraged by new biomedical methodologies.
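A Dice similarity over character di-grams is the kind of simple lexical score usable for matching UML class-attribute names to CDE object-property pairs. The sketch below is a generic implementation of that family of algorithm, with illustrative example names:

```python
def bigrams(s):
    # Distinct character bigrams of the lowercased string.
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    # Dice coefficient: 2|A & B| / (|A| + |B|) over bigram sets.
    A, B = bigrams(a), bigrams(b)
    if not A and not B:
        return 1.0
    return 2.0 * len(A & B) / (len(A) + len(B))

# A UML attribute matches a lexically similar CDE name far better than an
# unrelated one (names here are hypothetical examples).
print(dice("PatientAge", "patient_age") > dice("PatientAge", "TumorSite"))  # True
```

Ranking every candidate CDE by this score and taking the top hits gives the kind of automatic baseline match the paper evaluates.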
NASA Technical Reports Server (NTRS)
Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.
1990-01-01
A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.
NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter
2002-01-01
The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.
Hammoudeh, Mohammad; Newman, Robert; Dennett, Christopher; Mount, Sarah; Aldabbas, Omar
2015-01-01
This paper presents a distributed information extraction and visualisation service, called the mapping service, for maximising information return from large-scale wireless sensor networks. Such a service would greatly simplify the production of higher-level, information-rich representations suitable for informing other network services and the delivery of field information visualisations. The mapping service utilises a blend of inductive and deductive models to map sense data accurately using externally available knowledge. It exploits the special characteristics of the application domain to render visualisations in a map format that are a precise reflection of the concrete reality. The service is suitable for visualising an arbitrary number of sense modalities and can visualise multiple independent types of sense data, overcoming the limitations of generating visualisations from a single sense modality. Furthermore, the mapping service responds dynamically to changes in environmental conditions that may affect visualisation performance by continuously updating the application domain model in a distributed manner. Finally, a distributed self-adaptation function is proposed with the goal of saving power and generating more accurate data visualisations. We conduct comprehensive experimentation to evaluate the performance of our mapping service and show that it achieves low communication overhead, produces maps of high fidelity, and further minimises the mapping predictive error dynamically through integrating the application domain model in the mapping service. PMID:26378539
Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan
1997-01-01
A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.
Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm
Baig, Fahd; Little, Max A.
2016-01-01
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism. PMID:27669525
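The "K inferred from the data" idea can be illustrated with a DP-means-style sketch: like K-means, except that a point farther than a penalty distance from every centroid spawns a new cluster. This is a related but simpler algorithm than MAP-DP itself, which is probabilistic and Dirichlet-process-based; the sketch only conveys the shared intuition:

```python
import math

def dp_means(points, lam, iters=20):
    # DP-means-style clustering: assign each point to its nearest centroid
    # unless that distance exceeds `lam`, in which case the point starts a
    # new cluster; then recompute centroids as member means.
    centroids = [list(points[0])]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            d, k = min((math.dist(p, c), k) for k, c in enumerate(centroids))
            if d > lam:
                centroids.append(list(p))
                k = len(centroids) - 1
            assign[i] = k
        for k in range(len(centroids)):
            members = [p for p, a in zip(points, assign) if a == k]
            if members:
                centroids[k] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids

pts = [(0, 0), (0.1, 0), (10, 10), (10.1, 10)]
assign, cents = dp_means(pts, lam=3.0)
print(len(cents))  # 2 -- the number of clusters was not fixed in advance
```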
An adaptive SVSF-SLAM algorithm to improve the success and solving the UGVs cooperation problem
NASA Astrophysics Data System (ADS)
Demim, Fethi; Nemra, Abdelkrim; Louadj, Kahina; Hamerlain, Mustapha; Bazoula, Abdelouahab
2018-05-01
This paper presents a Decentralised Cooperative Simultaneous Localization and Mapping (DCSLAM) solution based on 2D laser data using an Adaptive Covariance Intersection (ACI). The ACI-DCSLAM algorithm is validated on a swarm of Unmanned Ground Vehicles (UGVs) receiving features to estimate the position and covariance of shared features before adding them to the global map. With the proposed solution, a group of UGVs is able to construct a large reliable map and localise themselves within this map without any user intervention. The most popular solutions to this problem are EKF-SLAM, nonlinear H-infinity (H∞) SLAM and FastSLAM. The first suffers from two important problems: poor consistency caused by linearization and the calculation of Jacobians. The second, the H∞ filter, is very promising because it makes no assumptions about noise characteristics, while the latter is not suitable for real-time implementation. Therefore, a new alternative solution based on the smooth variable structure filter (SVSF) is adopted. A cooperative adaptive SVSF-SLAM algorithm is proposed in this paper to solve the UGV SLAM problem. Our main contribution consists in adapting the SVSF filter to solve the decentralised cooperative SLAM problem for multiple UGVs. The algorithms developed in this paper were implemented using two Pioneer mobile robots equipped with 2D laser telemetry sensors. Good results are obtained by the cooperative adaptive SVSF-SLAM algorithm compared to the cooperative EKF/H∞-SLAM algorithms, especially when the noise is colored or affected by a variable bias. Simulation results confirm and show the efficiency of the proposed algorithm, which is more robust, stable and suited to real-time applications.
NASA Astrophysics Data System (ADS)
Wibisana, H.; Zainab, S.; Dara K., A.
2018-01-01
Chlorophyll-a is one of the parameters used to detect the presence of fish populations, as well as one of the parameters used to assess water quality. Chlorophyll concentrations have been investigated extensively, including chlorophyll-a mapping with remote sensing satellites. Mapping of chlorophyll concentration is used to obtain an optimal picture of the condition of waters often used as fishing areas by fishermen. Remote sensing is a technological breakthrough for broadly monitoring the condition of waters, and to obtain a complete picture of aquatic conditions an algorithm is needed that can estimate the chlorophyll concentration at points scattered across the capture-fisheries research area. Remote sensing algorithms have been widely used by researchers to detect chlorophyll content; the channels of Landsat-8 imagery that correspond to mapping chlorophyll concentrations are bands 4, 3 and 2. With multiple channels of Landsat-8 satellite imagery used for chlorophyll detection, an optimal algorithm can be formulated to obtain the best estimate of chlorophyll-a concentration in the research area. From these calculations, a suitable algorithm for the conditions on the coast of Pasuruan was found, where the green channel gives a good correlation of R² = 0.853 with the algorithm Chlorophyll-a (mg/m³) = 0.093 R(0−)Red − 3.7049. From this result it can be concluded that the green channel correlates well with the concentration of chlorophyll scattered along the coast of Pasuruan.
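The band-versus-chlorophyll regression behind such a single-band algorithm can be sketched with ordinary least squares. The reflectance/chlorophyll pairs below are invented for illustration, not the Pasuruan measurements:

```python
# Minimal sketch of fitting a single-band linear chlorophyll-a algorithm
# of the form  chl = a * R_band + b  and scoring it with R^2, as done when
# screening Landsat-8 bands against in-situ samples.

def linear_fit(x, y):
    """Ordinary least squares for y = a*x + b; returns (a, b, r_squared)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

# Illustrative in-situ chlorophyll-a (mg/m^3) vs. band reflectance pairs
reflectance = [0.042, 0.051, 0.060, 0.073, 0.081]
chlorophyll = [1.1, 1.4, 1.6, 2.0, 2.2]
a, b, r2 = linear_fit(reflectance, chlorophyll)
```

Repeating the fit per band and keeping the band with the highest R² is the band-selection procedure the abstract describes.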
Novel search algorithms for a mid-infrared spectral library of cotton contaminants.
Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A
2008-06-01
During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. 
This group algorithm was able to identify correctly as many test spectra as the best standard algorithm without relying on human choice to select a standard algorithm to perform the searches.
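Two of the standard metrics and a category-frequency vote over their top-ranked hits can be sketched as follows. The spectra, category labels, and top-N rule are toy stand-ins for the mid-IR library and the paper's voting schemes:

```python
# Sketch of spectral library search: two standard similarity metrics plus
# a simple category-frequency voting scheme over their top-N results.
from collections import Counter

def dot_score(a, b):
    return sum(x * y for x, y in zip(a, b))          # higher = more similar

def abs_diff_score(a, b):
    return -sum(abs(x - y) for x, y in zip(a, b))    # higher = more similar

def vote_category(query, library, metrics, top_n=3):
    """Each metric ranks the library; the sample category appearing most
    often across the metrics' top-N hits wins."""
    votes = Counter()
    for metric in metrics:
        ranked = sorted(library, key=lambda e: metric(query, e[0]),
                        reverse=True)
        for spectrum, category in ranked[:top_n]:
            votes[category] += 1
    return votes.most_common(1)[0][0]

# Toy three-point "spectra" labeled by contaminant category
library = [
    ([0.9, 0.1, 0.3], "leaf"), ([0.8, 0.2, 0.3], "leaf"),
    ([0.1, 0.9, 0.7], "stem"), ([0.2, 0.8, 0.6], "stem"),
    ([0.5, 0.5, 0.9], "hull"),
]
query = [0.85, 0.15, 0.3]
best = vote_category(query, library, [dot_score, abs_diff_score])
```

Because the metrics vote on categories rather than individual spectra, no single metric has to be trusted on its own, which is the point of the group scheme.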
An efficient hole-filling method based on depth map in 3D view generation
NASA Astrophysics Data System (ADS)
Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
In 3D view generation, a new virtual view is synthesized through depth-image-based rendering (DIBR) using a single color image and its associated depth map. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address this problem. Firstly, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, and the "z-buffer" algorithm is used to resolve overlaps. Then, based on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the information in the depth map to process the image after DIBR. In order to improve the accuracy of the virtual image, inpainting starts from the background side. In the calculation of the priority, in addition to the confidence term and the data term, we add a depth term. In the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method effectively improves the quality of the 3D virtual view both subjectively and objectively.
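The modified patch priority can be sketched as below. The multiplicative combination of the three terms and the larger-depth-is-farther convention are illustrative assumptions, since the abstract does not give the exact form:

```python
# Sketch of a depth-augmented patch priority for hole filling after DIBR:
# Criminisi's confidence and data terms are multiplied by an extra depth
# term that favours background (far) patches, so inpainting proceeds from
# the background side first. Term definitions are simplified.

def priority(confidence, data, depth, depth_max):
    """confidence, data: Criminisi's C(p) and D(p), both in [0, 1];
    depth: patch-centre depth value, larger = farther away (assumed)."""
    depth_term = depth / depth_max       # background patches score higher
    return confidence * data * depth_term

# Two candidate boundary patches with equal confidence/data terms:
fg = priority(confidence=0.8, data=0.5, depth=20, depth_max=255)
bg = priority(confidence=0.8, data=0.5, depth=240, depth_max=255)
```

With equal Criminisi terms, the background patch wins the priority contest, which is exactly the ordering the method wants.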
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.; Ray, Taylor J.
2013-01-01
Two versions of airborne wind profiling algorithms for the pulsed 2-micron coherent Doppler lidar system at NASA Langley Research Center in Virginia are presented. Each algorithm utilizes a different number of line-of-sight (LOS) lidar returns while compensating for the adverse effects of the differing coordinate systems of the aircraft and the Earth. One of the two algorithms, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), estimates wind products using two LOSs; the other utilizes five. The airborne lidar data were acquired during NASA's Genesis and Rapid Intensification Processes (GRIP) campaign in 2010. The wind profile products from the two algorithms are compared with dropsonde data to validate their results.
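A two-LOS horizontal wind retrieval reduces to a 2x2 linear solve. The sketch below ignores elevation angle and the aircraft-attitude compensation that the real algorithms perform; the angles and velocities are illustrative:

```python
# Sketch of recovering the horizontal wind (u: east, v: north) from two
# line-of-sight radial velocities measured at different azimuths, the core
# of a two-LOS retrieval. Vertical wind and attitude effects are omitted.
import math

def wind_from_two_los(vr1, az1_deg, vr2, az2_deg):
    """Solve  vr_i = u*sin(az_i) + v*cos(az_i)  for (u, v) by Cramer's rule."""
    a1, a2 = math.radians(az1_deg), math.radians(az2_deg)
    det = math.sin(a1) * math.cos(a2) - math.cos(a1) * math.sin(a2)
    u = (vr1 * math.cos(a2) - vr2 * math.cos(a1)) / det
    v = (math.sin(a1) * vr2 - math.sin(a2) * vr1) / det
    return u, v

# A pure westerly of 10 m/s (u=10, v=0) seen at azimuths 45 and 135 deg:
u, v = wind_from_two_los(10 * math.sin(math.radians(45)), 45,
                         10 * math.sin(math.radians(135)), 135)
```

Using more than two LOS directions, as the five-LOS algorithm does, turns this into an overdetermined least-squares problem and averages down the noise.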
NASA Astrophysics Data System (ADS)
Burton, A. S.; Berger, E. L.; Locke, D. R.; Lewis, E. K.; Moore, J. F.
2018-04-01
Laser microprobe analysis of surfaces using a two-laser setup, in which the desorption laser is operated below the ionization threshold and the resulting neutral plume is then ionized with 157 nm vacuum ultraviolet laser light for mass spectrometric surface mapping.
Estimating the vegetation canopy height using micro-pulse photon-counting LiDAR data.
Nie, Sheng; Wang, Cheng; Xi, Xiaohuan; Luo, Shezhou; Li, Guoyuan; Tian, Jinyan; Wang, Hongtao
2018-05-14
The upcoming space-borne LiDAR satellite, the Ice, Cloud and land Elevation Satellite-2 (ICESat-2), is scheduled to launch in 2018. Unlike the waveform LiDAR system onboard ICESat, ICESat-2 will use a micro-pulse photon-counting LiDAR system, so new data processing algorithms are required to retrieve vegetation canopy height from photon-counting LiDAR data. The objective of this paper is to develop and validate an automated approach for better estimating vegetation canopy height. The proposed method consists of three key steps: 1) filtering out noise photons with an effective noise removal algorithm based on localized statistical analysis; 2) separating ground returns from canopy returns using an iterative photon classification algorithm and then determining the ground surface; and 3) generating the canopy-top surface and calculating vegetation canopy height from the canopy-top and ground surfaces. This automatic vegetation height estimation approach was applied to simulated ICESat-2 data produced from Sigma Space LiDAR data and Multiple Altimeter Beam Experimental LiDAR (MABEL) data, and the retrieved vegetation canopy heights were validated against canopy height models (CHMs) derived from airborne discrete-return LiDAR data. Results indicated that the estimated vegetation canopy heights have a relatively strong correlation with the reference vegetation heights (R² values of 0.639 to 0.810 and RMSE values of 4.08 m to 4.56 m, respectively). This suggests the proposed approach is appropriate for retrieving vegetation canopy height from micro-pulse photon-counting LiDAR data.
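Step 1 can be sketched as density-based filtering: a photon survives if enough neighbours fall inside a local search window. The window sizes, threshold factor, and synthetic photon cloud below are illustrative assumptions:

```python
# Sketch of photon-counting LiDAR noise removal by localized statistics:
# keep a photon when its neighbour count inside an (along-track, elevation)
# window exceeds a fraction of the mean local density.

def local_counts(photons, dx=10.0, dz=3.0):
    """Neighbour count per photon; photons are (along_track_m, elevation_m)."""
    counts = []
    for x0, z0 in photons:
        c = sum(1 for x, z in photons
                if abs(x - x0) <= dx and abs(z - z0) <= dz) - 1
        counts.append(c)
    return counts

def filter_noise(photons, factor=0.5):
    counts = local_counts(photons)
    mean_c = sum(counts) / len(counts)
    return [p for p, c in zip(photons, counts) if c >= factor * mean_c]

# Dense "ground" cluster plus two isolated noise photons high above it
signal = [(i * 2.0, 100.0 + 0.1 * i) for i in range(20)]
noise = [(5.0, 300.0), (30.0, 450.0)]
kept = filter_noise(signal + noise)
```

The surviving photons would then feed the ground/canopy classification of step 2.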
A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-01-01
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
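The WiFi fingerprinting core of such a scheme can be sketched as plain K nearest neighbours in RSSI space. The floor-map topology constraint is omitted here, and the reference-point fingerprints and coordinates are invented:

```python
# Sketch of KNN WiFi fingerprint positioning: find the K surveyed
# reference points (RPs) closest to the measured RSSI vector and average
# their coordinates.

def knn_position(rssi, reference_points, k=2):
    """reference_points: list of ((x, y), [rssi per access point])."""
    def dist(fp):
        return sum((a - b) ** 2 for a, b in zip(rssi, fp)) ** 0.5
    nearest = sorted(reference_points, key=lambda rp: dist(rp[1]))[:k]
    x = sum(p[0][0] for p in nearest) / k
    y = sum(p[0][1] for p in nearest) / k
    return x, y

# Four surveyed RPs in a 5 m x 5 m room, three access points each
rps = [
    ((0.0, 0.0), [-40, -70, -80]),
    ((5.0, 0.0), [-55, -60, -75]),
    ((5.0, 5.0), [-70, -50, -60]),
    ((0.0, 5.0), [-75, -65, -45]),
]
est = knn_position([-52, -62, -74], rps, k=2)
```

The topology constraint in the paper restricts which RPs are eligible neighbours based on the floor-map layout, which is what suppresses the "go and back" mismatches before the EKF fusion.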
Liu, L L; Liu, M J; Ma, M
2015-09-28
The central task of this study was to mine the gene-to-medium relationship. Adequate knowledge of this relationship could potentially improve the accuracy of differentially expressed gene mining. One of the approaches to differentially expressed gene mining uses conventional clustering algorithms to identify the gene-to-medium relationship. Compared to conventional clustering algorithms, self-organizing maps (SOMs) identify the nonlinear aspects of the gene-to-medium relationships by mapping the input space into another higher dimensional feature space. However, SOMs are not suitable for huge datasets consisting of millions of samples. Therefore, a new computational model, the Function Clustering Self-Organization Maps (FCSOMs), was developed. FCSOMs take advantage of the theory of granular computing as well as advanced statistical learning methodologies, and are built specifically for each information granule (a function cluster of genes), which are intelligently partitioned by the clustering algorithm provided by the DAVID_6.7 software platform. However, only the gene functions, and not their expression values, are considered in the fuzzy clustering algorithm of DAVID. Compared to the clustering algorithm of DAVID, the experimental results show a marked improvement in the accuracy of classification with the application of FCSOMs. FCSOMs can handle huge datasets and their complex classification problems, as each FCSOM (modeled for each function cluster) can be easily parallelized.
Net returns from segregating dark northern spring wheat by protein concentration during harvest
USDA-ARS?s Scientific Manuscript database
In-line, optical sensing has been developed for on-combine measurement and mapping of grain protein concentration (GPC). The objective of this study was to estimate changes in costs and net returns from using this technology for segregation of the dark northern spring (DNS) subclass of hard red whe...
NASA Astrophysics Data System (ADS)
Begg, John; Brackley, Hannah; Irwin, Marion; Grant, Helen; Berryman, Kelvin; Dellow, Grant; Scott, David; Jones, Katie; Barrell, David; Lee, Julie; Townsend, Dougal; Jacka, Mike; Harwood, Nick; McCahon, Ian; Christensen, Steve
2013-04-01
Following the damaging 4 Sept 2010 Mw7.1 Darfield Earthquake, the 22 Feb 2011 Christchurch Earthquake and subsequent damaging aftershocks, we completed a liquefaction hazard evaluation for c. 2700 km2 of the coastal Canterbury region. Its purpose was to distinguish at a regional scale areas of land that, in the event of strong ground shaking, may be susceptible to damaging liquefaction from areas where damaging liquefaction is unlikely. This information will be used by local government for defining liquefaction-related geotechnical investigation requirements for consent applications. Following a review of historic records of liquefaction and existing liquefaction assessment maps, we undertook comprehensive new work that included: a geologic context from existing geologic maps; geomorphic mapping using LiDAR and integrating existing soil map data; compilation of lithological data for the surficial 10 m from an extensive drillhole database; modelling of depth to unconfined groundwater from existing subsurface and surface water data. Integrating and honouring all these sources of information, we mapped areas underlain by materials susceptible to liquefaction (liquefaction-prone lithologies present, or likely, in the near-surface, with shallow unconfined groundwater) from areas unlikely to suffer widespread liquefaction damage. Comparison of this work with more detailed liquefaction susceptibility assessment based on closely spaced geotechnical probes in Christchurch City provides a level of confidence in these results. We tested our susceptibility map by assigning a matrix of liquefaction susceptibility rankings to lithologies recorded in drillhole logs and local groundwater depths, then applying peak ground accelerations for four earthquake scenarios from the regional probabilistic seismic hazard model (25 year return = 0.13g; 100 year return = 0.22g; 500 year return = 0.38g and 2500 year return = 0.6g). 
Our mapped boundary between liquefaction-prone areas and areas unlikely to sustain heavy damage proved sound. In addition, we compared mapped liquefaction extents (derived from post-earthquake aerial photographs) from the 4 Sept 2010 Mw7.1 and 22 Feb 2011 Mw6.2 earthquakes with our liquefaction susceptibility map. The overall area of liquefaction for these two earthquakes was similar, and statistics show that for the first (large regional) earthquake, c. 93% of mapped liquefaction fell within the liquefaction-prone area, and for the second (local, high peak ground acceleration) earthquake, almost 99% fell within the liquefaction-prone area. We conclude that basic geological and groundwater data when coupled with LiDAR data can usefully delineate areas susceptible to liquefaction from those unlikely to suffer damaging liquefaction. We believe that these techniques can be used successfully in many other cities around the world.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2006-01-01
The radial return and Mendelson methods for integrating the equations of classical plasticity, which appear independently in the literature, are shown to be identical. Both methods are presented in detail, as are the specifics of their algorithmic implementation. Results illustrate the methods' equivalence across a range of conditions and address the question of when the methods require iteration in order for the plastic state to remain on the yield surface. FORTRAN code implementations of the radial return and Mendelson methods are provided in the appendix.
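The radial return scheme can be sketched for J2 plasticity with linear isotropic hardening, a case where the consistency condition has a closed-form solution and no iteration is needed. The material constants below are illustrative:

```python
# Sketch of the radial return predictor-corrector for small-strain J2
# plasticity with linear isotropic hardening. The deviatoric stress is
# carried as its principal components.
import math

def radial_return(s_trial, sigma_y, eps_p, G, H):
    """s_trial: trial deviatoric principal stresses [s1, s2, s3];
    sigma_y: initial yield stress; eps_p: accumulated plastic strain;
    G: shear modulus; H: linear isotropic hardening modulus."""
    norm = math.sqrt(sum(s * s for s in s_trial))
    radius = math.sqrt(2.0 / 3.0) * (sigma_y + H * eps_p)
    f = norm - radius                        # yield function on trial state
    if f <= 0.0:                             # elastic step: accept predictor
        return list(s_trial), eps_p
    dgamma = f / (2.0 * G + 2.0 * H / 3.0)   # closed-form consistency solve
    scale = 1.0 - 2.0 * G * dgamma / norm    # return radially to the surface
    s = [scale * si for si in s_trial]
    eps_p_new = eps_p + math.sqrt(2.0 / 3.0) * dgamma
    return s, eps_p_new

# Illustrative MPa-scale constants and a trial state outside the surface
G, H, sigma_y0 = 80.0e3, 10.0e3, 250.0
s_new, eps_p_new = radial_return([300.0, -150.0, -150.0], sigma_y0, 0.0, G, H)
```

After the correction, the updated deviatoric stress lies exactly on the expanded yield surface, which is the "no iteration required" case the abstract refers to; nonlinear hardening is when a local iteration becomes necessary.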
Mathematical modelling of risk reduction in reinsurance
NASA Astrophysics Data System (ADS)
Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.
2017-01-01
The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal ratio between the expected return and the risk of the yield falling below a certain level. The uncertainty in the return values is addressed through expert evaluations and preliminary calculations, which yield expected return values and the corresponding risk levels. The proposed method allows for computationally simple schemes and algorithms for the numerical construction of efficient portfolios of reinsurance contracts for a given insurance company.
Fast mapping algorithm of lighting spectrum and GPS coordinates for a large area
NASA Astrophysics Data System (ADS)
Lin, Chih-Wei; Hsu, Ke-Fang; Hwang, Jung-Min
2016-09-01
In this study, we propose a fast reconstruction technique for evaluating light quality over large areas. Outdoor light quality, measured by illuminance uniformity and the color rendering index, is difficult to verify after a lighting improvement. We develop an algorithm for a lighting-quality mapping system that pairs spectral measurements with coordinates, using a micro spectrometer and GPS tracker integrated with a quadcopter or unmanned aerial vehicle. After the vehicle cruises at a constant altitude, the lighting-quality data are transmitted and immediately mapped to evaluate the light quality over a large area.
PID feedback controller used as a tactical asset allocation technique: The G.A.M. model
NASA Astrophysics Data System (ADS)
Gandolfi, G.; Sabatini, A.; Rossolini, M.
2007-09-01
The objective of this paper is to illustrate a tactical asset allocation technique utilizing the PID controller. The proportional-integral-derivative (PID) controller is widely applied in most industrial processes; it has been used successfully for over 50 years and is employed in more than 95% of process plants. It is a robust and easily understood algorithm that can provide excellent control performance despite the diverse dynamic characteristics of process plants. In finance, the process plant controlled by the PID controller can be represented by the financial market assets forming a portfolio. More specifically, in the present work the plant is represented by a risk-adjusted return variable. The main target of money and portfolio managers is to achieve a relevant risk-adjusted return in their management activities, and numerous kinds of return/risk ratios are commonly studied and used in the literature and in the financial industry. The aim of this work is to build a tactical asset allocation technique that optimizes risk-adjusted return by means of asset allocation methodologies based on the PID model-free feedback control procedure. The process plant does not need to be mathematically modeled: the PID control action lies in altering the portfolio asset weights, according to the PID algorithm and its Ziegler-Nichols-tuned parameters, in order to approach the desired portfolio risk-adjusted return efficiently.
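The control loop can be sketched as follows. The gains, the weight-clipping rule, and the Sharpe-like input series are illustrative choices, not the paper's Ziegler-Nichols-tuned values:

```python
# Sketch of a PID feedback loop steering a risky-asset weight toward a
# target risk-adjusted return: the tracking error drives proportional,
# integral, and derivative corrections to the portfolio weight.

def pid_rebalance(measured, target, weight, state, kp=0.5, ki=0.1, kd=0.2):
    """state is (integral, previous_error); returns (new_weight, new_state)."""
    integral, prev_err = state
    err = target - measured
    integral += err
    output = kp * err + ki * integral + kd * (err - prev_err)
    weight = min(1.0, max(0.0, weight + output))   # keep weight in [0, 1]
    return weight, (integral, err)

weight, state = 0.5, (0.0, 0.0)
for sharpe in [0.3, 0.4, 0.45, 0.5]:    # measured risk-adjusted return series
    weight, state = pid_rebalance(sharpe, target=0.6,
                                  weight=weight, state=state)
```

Because the measured risk-adjusted return stays below the 0.6 target throughout, the controller keeps pushing the risky-asset weight up until it saturates at the allocation cap.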
NASA Astrophysics Data System (ADS)
Hori, Yasuaki; Yasuno, Yoshiaki; Sakai, Shingo; Matsumoto, Masayuki; Sugawara, Tomoko; Madjarova, Violeta; Yamanari, Masahiro; Makita, Shuichi; Yasui, Takeshi; Araki, Tsutomu; Itoh, Masahide; Yatagai, Toyohiko
2006-03-01
A set of fully automated algorithms specialized for analyzing a three-dimensional optical coherence tomography (OCT) volume of human skin is reported. The algorithm set first determines the skin surface of the OCT volume, and a depth-oriented algorithm provides the mean epidermal thickness, a distribution map of the epidermis, and a segmented volume of the epidermis. Subsequently, an en face shadowgram is produced by an algorithm to visualize the infundibula in the skin with high contrast. The population and occupation ratio of the infundibula are provided by a histogram-based thresholding algorithm and a distance mapping algorithm. En face OCT slices at constant depths from the sample surface are extracted, and the histogram-based thresholding algorithm is again applied to these slices, yielding a three-dimensional segmented volume of the infundibula. The dermal attenuation coefficient is also calculated from the OCT volume in order to evaluate the skin texture. The algorithm set was applied to swept-source OCT volumes of the skin of several volunteers, and the results show the high stability, portability and reproducibility of the algorithm set.
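The histogram-based thresholding step can be sketched with Otsu's criterion as a stand-in, since the abstract does not specify the exact criterion used; the pixel values below are a synthetic bimodal slice:

```python
# Sketch of histogram-based thresholding of an en face OCT slice to
# separate low-intensity structures (e.g. infundibula) from surrounding
# tissue, using Otsu's between-class variance criterion.

def otsu_threshold(values, levels=256):
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])                     # pixels below threshold
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic 4x4 slice: a dark cluster (structure) and a bright background
slice_pixels = [20, 25, 22, 30, 200, 210, 205, 190,
                18, 24, 215, 198, 26, 21, 207, 202]
t = otsu_threshold(slice_pixels)
mask = [p < t for p in slice_pixels]   # True = candidate structure pixel
```

Counting True pixels in the mask gives the kind of population/occupation-ratio statistics the abstract describes.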
Implementation of a parallel protein structure alignment service on cloud.
Hung, Che-Lun; Lin, Yaw-Ling
2013-01-01
Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.
PMID:23671842
Suram, Santosh K.; Xue, Yexiang; Bai, Junwen; ...
2016-11-21
Rapid construction of phase diagrams is a central tenet of combinatorial materials science, and accelerated materials discovery efforts are often hampered by challenges in interpreting combinatorial X-ray diffraction data sets. We address this challenge by developing AgileFD, an artificial intelligence algorithm that enables rapid phase mapping from a combinatorial library of X-ray diffraction patterns. AgileFD models alloying-based peak shifting through a novel expansion of convolutional nonnegative matrix factorization, which not only improves the identification of constituent phases but also maps their concentration and lattice parameter as a function of composition. By incorporating Gibbs’ phase rule into the algorithm, physically meaningful phase maps are obtained with unsupervised operation, and more refined solutions are attained by injecting expert knowledge of the system. The algorithm is demonstrated through investigation of the V–Mn–Nb oxide system, where decomposition of eight oxide phases, including two with substantial alloying, provides the first phase map for this pseudoternary system. This phase map enables interpretation of high-throughput band gap data, leading to the discovery of new solar light absorbers and the alloying-based tuning of the direct-allowed band gap energy of MnV2O6. Lastly, the open-source family of AgileFD algorithms can be implemented into a broad range of high-throughput workflows to accelerate materials discovery.
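The core decomposition can be sketched as plain nonnegative matrix factorization via multiplicative updates. The shifting/convolutional machinery and Gibbs-rule constraints of the real algorithm are omitted, and the diffraction "patterns" are a synthetic two-phase mixture:

```python
# Toy sketch of the factorization behind phase mapping: X ~ W.H, where
# rows of X are measured patterns, rows of H are basis "phases", and W
# holds per-sample phase activations. Lee-Seung multiplicative updates.
import random

def nmf(X, k, iters=1000, eps=1e-9):
    random.seed(0)
    n, m = len(X), len(X[0])
    W = [[random.random() for _ in range(k)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WH = [[sum(W[i][r] * H[r][j] for r in range(k)) for j in range(m)]
              for i in range(n)]
        for r in range(k):                     # H *= (W^T X) / (W^T W H)
            for j in range(m):
                num = sum(W[i][r] * X[i][j] for i in range(n))
                den = sum(W[i][r] * WH[i][j] for i in range(n)) + eps
                H[r][j] *= num / den
        WH = [[sum(W[i][r] * H[r][j] for r in range(k)) for j in range(m)]
              for i in range(n)]
        for i in range(n):                     # W *= (X H^T) / (W H H^T)
            for r in range(k):
                num = sum(X[i][j] * H[r][j] for j in range(m))
                den = sum(WH[i][j] * H[r][j] for j in range(m)) + eps
                W[i][r] *= num / den
    return W, H

phase_a = [1.0, 0.0, 2.0, 0.0]
phase_b = [0.0, 3.0, 0.0, 1.0]
X = [[0.7 * a + 0.3 * b for a, b in zip(phase_a, phase_b)],
     [0.2 * a + 0.8 * b for a, b in zip(phase_a, phase_b)],
     [0.5 * a + 0.5 * b for a, b in zip(phase_a, phase_b)]]
W, H = nmf(X, k=2)
```

Nonnegativity is what makes the factors physically interpretable as phases and fractions; AgileFD's contribution is layering peak shifting and phase-rule constraints on top of this core.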
Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya
2014-01-01
Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on a constrained Delaunay triangulation (CDT) skeleton and an improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727
TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oines, A; Oines, A; Kilian-Meneghin, J
2016-06-15
Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first used, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient, using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps all correlation confidences to a surface map. The highest point of correlation is used for alignment and scaling. The transformation data is saved for later model scaling. Results: Anatomic recognition is used to find matching features between model and image, and image registration correlation provides for alignment and scaling at any rotation angle with less than one-second runtime, and at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
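The template-matching step can be sketched in 1-D with normalized cross-correlation, the quantity behind OpenCV's mean-subtracted TM_CCOEFF_NORMED mode; the intensity profiles below are synthetic, and the real system runs this in 2-D after edge enhancement:

```python
# Sketch of template matching by normalized cross-correlation (NCC):
# slide a reference patch over an intensity profile and keep the offset
# with the highest correlation score.
import math

def ncc(a, b):
    """Mean-subtracted normalized cross-correlation of two equal-length
    sequences; 1.0 means a perfect (affine-positive) match."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def best_match(profile, template):
    """Return (offset, score) of the best-correlating window."""
    scores = [(i, ncc(profile[i:i + len(template)], template))
              for i in range(len(profile) - len(template) + 1)]
    return max(scores, key=lambda s: s[1])

profile = [0, 1, 0, 2, 5, 9, 5, 2, 0, 1, 0]
template = [2, 5, 9, 5, 2]
offset, score = best_match(profile, template)
```

Mapping the per-offset scores over the whole search range gives exactly the "surface map of correlation confidences" the abstract describes, with the peak taken for alignment.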
Quantifying the tibiofemoral joint space using x-ray tomosynthesis.
Kalinosky, Benjamin; Sabol, John M; Piacsek, Kelly; Heckel, Beth; Gilat Schmidt, Taly
2011-12-01
Digital x-ray tomosynthesis (DTS) has the potential to provide 3D information about the knee joint in a load-bearing posture, which may improve diagnosis and monitoring of knee osteoarthritis compared with projection radiography, the current standard of care. Manually quantifying and visualizing the joint space width (JSW) from 3D tomosynthesis datasets may be challenging. This work developed a semiautomated algorithm for quantifying the 3D tibiofemoral JSW from reconstructed DTS images. The algorithm was validated through anthropomorphic phantom experiments and applied to three clinical datasets. A user-selected volume of interest within the reconstructed DTS volume was enhanced with 1D multiscale gradient kernels. The edge-enhanced volumes were divided by polarity into tibial and femoral edge maps and combined across kernel scales. A 2D connected components algorithm was performed to determine candidate tibial and femoral edges. A 2D JSW map was constructed to represent the 3D tibiofemoral joint space. To quantify the algorithm's accuracy, an adjustable knee phantom was constructed, and eleven posterior-anterior (PA) and lateral DTS scans were acquired with the medial minimum JSW of the phantom set to 0-5 mm in 0.5 mm increments (VolumeRad™, GE Healthcare, Chalfont St. Giles, United Kingdom). The accuracy of the algorithm was quantified by comparing the minimum JSW in a region of interest in the medial compartment of the JSW map to the measured phantom setting for each trial. In addition, the algorithm was applied to DTS scans of a static knee phantom and the JSW map compared to values estimated from a manually segmented computed tomography (CT) dataset. The algorithm was also applied to three clinical DTS datasets of osteoarthritic patients. The algorithm segmented the JSW and generated a JSW map for all phantom and clinical datasets.
For the adjustable phantom, the estimated minimum JSW values were plotted against the measured values for all trials. A linear fit estimated a slope of 0.887 (R² = 0.962) and a mean error across all trials of 0.34 mm for the PA phantom data. The estimated minimum JSW values for the lateral adjustable phantom acquisitions were found to have low correlation to the measured values (R² = 0.377), with a mean error of 2.13 mm. The error in the lateral adjustable-phantom datasets appeared to be caused by artifacts due to unrealistic features in the phantom bones. JSW maps generated by DTS and CT varied by a mean of 0.6 mm and 0.8 mm across the knee joint, for PA and lateral scans. The tibial and femoral edges were successfully segmented and JSW maps determined for PA and lateral clinical DTS datasets. A semiautomated method is presented for quantifying the 3D joint space in a 2D JSW map using tomosynthesis images. The proposed algorithm quantified the JSW across the knee joint to sub-millimeter accuracy for PA tomosynthesis acquisitions. Overall, the results suggest that x-ray tomosynthesis may be beneficial for diagnosing and monitoring disease progression or treatment of osteoarthritis by providing quantitative images of JSW in the load-bearing knee.
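The map-based evaluation described above reduces to two small operations: forming a per-pixel width map from the two segmented edge surfaces and taking its minimum over a region of interest. A minimal sketch in Python, where `tibia`, `femur`, and the ROI mask are hypothetical stand-ins for the segmented DTS edge maps:

```python
# Sketch of the JSW-map step, assuming hypothetical edge maps:
# tibia[i, j] and femur[i, j] hold the z-coordinates (mm) of the segmented
# tibial and femoral edges at each (x, y) position in the reconstructed volume.
import numpy as np

def jsw_map(tibia, femur):
    """2D joint space width map: per-pixel femoral-tibial separation (mm)."""
    return femur - tibia

def min_jsw(jsw, roi):
    """Minimum JSW inside a boolean region-of-interest mask."""
    return float(jsw[roi].min())

# Toy example: a 4x4 joint with a 2 mm narrowing in one corner.
tibia = np.zeros((4, 4))
femur = np.full((4, 4), 3.0)
femur[0, 0] = 2.0                      # narrowed compartment
roi = np.ones((4, 4), dtype=bool)
jsw = jsw_map(tibia, femur)
print(min_jsw(jsw, roi))               # → 2.0
```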
Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker
2017-01-01
In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense: 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scans with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, performing 2×10^5 permutations for a 2D QTL problem in 15 hours using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.
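The permutation-testing step that dominates the computation can be sketched as follows; the scan statistic and phenotype values here are toy stand-ins, not PruneDIRECT itself:

```python
# Hedged sketch of the permutation-testing step (not PruneDIRECT itself):
# the significance threshold is the (1 - alpha) quantile of the maximum
# test statistic observed across permuted phenotype/genotype pairings.
import random

def permutation_threshold(scan_statistic, phenotypes, n_perm=200, alpha=0.05, seed=0):
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_perm):
        shuffled = phenotypes[:]
        rng.shuffle(shuffled)          # break the genotype-phenotype link
        maxima.append(scan_statistic(shuffled))
    maxima.sort()
    # (1 - alpha) empirical quantile of the null maxima
    return maxima[int((1 - alpha) * n_perm) - 1]

# Toy scan statistic at a single hypothetical locus:
# |mean phenotype of carriers - mean phenotype of non-carriers|.
genotypes = [0, 0, 0, 1, 1, 1]

def toy_stat(ys):
    g1 = [y for g, y in zip(genotypes, ys) if g == 1]
    g0 = [y for g, y in zip(genotypes, ys) if g == 0]
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

thr = permutation_threshold(toy_stat, [0.1, 0.2, 0.3, 1.1, 1.2, 1.3], n_perm=100)
```

A putative QTL is then declared significant when its observed statistic exceeds `thr`.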
Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen
2010-01-01
The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., segmentation of a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing, and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. Experimental studies using both simulated longitudinal MR brain data and Alzheimer’s Disease Neuroimaging Initiative (ADNI) data confirmed that using both priors yields more accurate and robust segmentation results. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399
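One way the prior constraint can enter the clustering is by modulating the fuzzy membership weights. The sketch below is an illustrative formulation under that assumption, not the paper's exact update equations:

```python
# Hedged sketch of the constraint idea (not the paper's exact formulation):
# fuzzy c-means memberships are modulated by a tissue probability prior, so a
# voxel's assignment reflects both its intensity distance to cluster centers
# and the prior probability of each tissue at that location.
import numpy as np

def constrained_memberships(intensities, centers, priors, m=2.0, eps=1e-9):
    # d2[i, k]: squared intensity distance of voxel i to cluster center k
    d2 = (intensities[:, None] - centers[None, :]) ** 2 + eps
    u = priors / d2 ** (1.0 / (m - 1.0))     # prior-weighted inverse distance
    return u / u.sum(axis=1, keepdims=True)  # normalize over clusters

voxels = np.array([0.1, 0.5, 0.9])           # toy voxel intensities
centers = np.array([0.0, 1.0])               # two hypothetical tissue classes
priors = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
u = constrained_memberships(voxels, centers, priors)
```

With a flat prior this reduces to the standard fuzzy c-means membership update.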
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of PAPA theoretically. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality.
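The building blocks of the characterization are proximity operators: prox_f(v) = argmin_x { f(x) + ||x - v||²/2 }. A minimal illustration using two operators with closed forms (soft-thresholding for an L1 term, a 1D analogue of the TV term, and projection onto the nonnegativity constraint); this is not the full PAPA iteration:

```python
# Illustrative sketch of proximity operators (not the full PAPA algorithm).
import numpy as np

def prox_l1(v, lam):
    """Soft-thresholding: proximity operator of lam * ||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def project_nonneg(v):
    """Projection onto the nonnegativity constraint (emission counts >= 0);
    a projection is the proximity operator of the set's indicator function."""
    return np.maximum(v, 0.0)

# One application of each operator on a toy vector.
x = np.array([-2.0, 0.3, 1.5])
y = project_nonneg(prox_l1(x, 0.5))
print(y)                               # → [0. 0. 1.]
```

PAPA alternates such operators (with the EM-preconditioner) until the fixed-point equations are satisfied.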
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-02-01
In this article we propose two conformal-mapping-based grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues, such as smoothed scaling factors and the new requirements on OGCM grids arising from the recent trend toward high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling, where complex land-ocean distributions are present.
Transcript mapping for handwritten English documents
NASA Astrophysics Data System (ADS)
Jose, Damien; Bharadwaj, Anurag; Govindaraju, Venu
2008-01-01
Transcript mapping or text alignment with handwritten documents is the automatic alignment of words in a text file with word images in a handwritten document. Such a mapping has several applications in fields ranging from machine learning, where large quantities of truth data are required for evaluating handwriting recognition algorithms, to data mining, where word image indexes are used in ranked retrieval of scanned documents in a digital library. The alignment also aids "writer identity" verification algorithms. Interfaces which display scanned handwritten documents may use this alignment to highlight manuscript tokens when a person examines the corresponding transcript word. We propose an adaptation of the True DTW dynamic programming algorithm for English handwritten documents. Our primary contribution is the integration of the dissimilarity scores from a word-model word recognizer and the Levenshtein distance between the recognized word and the lexicon word, as a cost metric in the DTW algorithm, leading to a fast and accurate alignment. The results provided confirm the effectiveness of our approach.
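The DTW recurrence with the combined cost metric can be sketched as below; the recognizer dissimilarity scores are assumed toy values, and the weighting `w` is a hypothetical parameter:

```python
# Hedged sketch of the alignment idea: a DTW-style dynamic program over
# transcript words (rows) and word images (columns), where cost(i, j) mixes a
# hypothetical recognizer dissimilarity with the Levenshtein distance between
# the recognizer's top hypothesis and the transcript word.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def dtw_align(transcript, hypotheses, dissim, w=0.5):
    """Minimum cost of a monotone alignment (standard DTW recurrence)."""
    n, m = len(transcript), len(hypotheses)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = w * dissim[i - 1][j - 1] + (1 - w) * levenshtein(
                transcript[i - 1], hypotheses[j - 1])
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

# Toy example: two transcript words, two word images recognized correctly.
transcript = ["the", "cat"]
hypotheses = ["the", "cat"]            # recognizer outputs per word image
dissim = [[0.0, 0.9], [0.9, 0.0]]      # assumed recognizer dissimilarity scores
print(dtw_align(transcript, hypotheses, dissim))   # → 0.0
```

Backtracking through `D` (omitted for brevity) recovers the word-to-image correspondence.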
Flood inundation extent mapping based on block compressed tracing
NASA Astrophysics Data System (ADS)
Shen, Dingtao; Rui, Yikang; Wang, Jiechen; Zhang, Yu; Cheng, Liang
2015-07-01
Flood inundation extent, depth, and duration are important factors affecting flood hazard evaluation. At present, flood inundation analysis is based mainly on a seeded region-growing algorithm, which is an inefficient process because it requires excessive recursive computations and is incapable of processing massive datasets. To address this problem, we propose a block compressed tracing algorithm for mapping the flood inundation extent, which reads the DEM data in blocks before transferring them to raster compression storage. This allows a smaller computer memory to process a larger amount of data, which overcomes the main limitation of the regular seeded region-growing algorithm. In addition, the use of a raster boundary tracing technique allows the algorithm to avoid the time-consuming computations required by seeded region growing. Finally, we conduct a comparative evaluation in the Chin-sha River basin; the results show that the proposed method solves the problem of flood inundation extent mapping based on massive DEM datasets with higher computational efficiency than the original method, which makes it suitable for practical applications.
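For contrast with the block compressed tracing approach, the baseline seeded region-growing step can be sketched with an explicit stack (avoiding deep recursion) on a toy DEM:

```python
# A minimal sketch of the baseline the paper improves on: seeded region
# growing over a DEM, marking every cell connected to the seed whose
# elevation is below the flood water level. An explicit stack avoids the
# deep recursion that makes the naive version impractical on large rasters.
def flood_extent(dem, seed, water_level):
    rows, cols = len(dem), len(dem[0])
    flooded = [[False] * cols for _ in range(rows)]
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < rows and 0 <= c < cols):
            continue
        if flooded[r][c] or dem[r][c] >= water_level:
            continue
        flooded[r][c] = True
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

dem = [[1, 1, 9],
       [1, 9, 1],
       [1, 1, 1]]
flooded = flood_extent(dem, seed=(0, 0), water_level=5)
print(sum(map(sum, flooded)))          # → 7
```

The proposed algorithm replaces this cell-by-cell growth with boundary tracing over block-compressed rasters.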
Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.
Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer
2015-11-01
Counting objects is a fundamental image processing primitive with many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible of humans (under certain intuitively reasonable assumptions that we justify experimentally in our paper); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
Effects of Regolith Properties on UV/VIS Spectra and Implications for Lunar Remote Sensing
NASA Astrophysics Data System (ADS)
Coman, Ecaterina Oana
Lunar regolith chemistry, mineralogy, various maturation factors, and grain size dominate the reflectance of the lunar surface at ultraviolet (UV) to visible (VIS) wavelengths. These regolith properties leave unique fingerprints on reflectance spectra in the form of varied spectral shapes, reflectance intensity values, and absorption bands. With the addition of returned lunar soils from the Apollo and Luna missions as ground truth, these spectral fingerprints can be used to derive maps of global lunar chemistry or mineralogy to analyze the range of basalt types on the Moon, their spatial distribution, and source regions for clues to lunar formation history and evolution. The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) is the first lunar imager to detect bands at UV wavelengths (321 and 360 nm) in addition to visible bands (415, 566, 604, 643, and 689 nm). This dissertation uses a combination of laboratory and remote sensing studies to examine the relation between TiO2 concentration and WAC UV/VIS spectral ratios and to test the effects of variations in lunar chemistry, mineralogy, and soil maturity on ultraviolet and visible wavelength reflectance. Chapter 1 presents an introduction to the dissertation that includes some background in lunar mineralogy and remote sensing. Chapter 2 covers coordinated analyses of returned lunar soils using UV-VIS spectroscopy, X-ray diffraction, and micro X-ray fluorescence. Chapter 3 contains comparisons of local and global remote sensing observations of the Moon using LROC WAC and Clementine UVVIS TiO2 detection algorithms and Lunar Prospector (LP) Gamma Ray Spectrometer (GRS)-derived FeO and TiO2 concentrations. While the data shows effects from maturity and FeO on the UV/VIS detection algorithm, a UV/VIS relationship remains a simple yet accurate method for TiO2 detection on the Moon.
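A ratio-based TiO2 map of the kind discussed can be sketched as below; the calibration coefficients `a` and `b` are placeholders for illustration, not the empirically fitted LROC WAC values:

```python
# Hedged sketch of a ratio-based TiO2 map, assuming hypothetical calibration
# coefficients a and b (the real LROC WAC fit is empirical and is not
# reproduced here): wt% TiO2 ≈ a * (R_321 / R_415) + b.
import numpy as np

def tio2_map(r321, r415, a=100.0, b=-55.0):
    """Per-pixel TiO2 estimate from the UV/VIS reflectance ratio (illustrative)."""
    ratio = r321 / r415
    return a * ratio + b

r321 = np.array([[0.06, 0.07]])        # UV band reflectance (assumed values)
r415 = np.array([[0.10, 0.10]])        # VIS band reflectance (assumed values)
print(tio2_map(r321, r415))
```

Higher UV/VIS ratios map to higher TiO2, consistent with the spectral behavior of ilmenite-bearing soils described above.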
NASA Astrophysics Data System (ADS)
Cieszynski, Lukasz; Furmanczyk, Kazimierz
2017-04-01
Bathymetry data for the coastal zone of the Baltic Sea are usually created in profiles based on echo-sounding measurements. However, in the shallow coastal zone (up to 4 m depth), the quality and accuracy of the data are insufficient because of the spatial variability of the seabed. The green laser - LIDAR - can be a solution for studies of such shallow areas. However, this method is still an expensive one, and that is why we have decided to use RGB digital aerial photographs to create a model for mapping the seabed of the shallow coastal zone. In the 1960s, researchers in the USA (Musgrove, 1969) and Russia (Zdanowicz, 1963) developed the first methods of determining bathymetry from panchromatic (black-and-white) aerial photographs. This method was adapted for Polish conditions by Furmanczyk in 1975, and in 2014 we returned to his concept using more advanced techniques of recording and image processing. In our study, we propose to determine the bathymetry in the shallow coastal zone of the Baltic Sea by using digital vertical aerial photographs (both single- and multi-channel spectral). These photos are high-resolution matrices (10 cm per pixel) containing grey-level values in the individual spectral bands (RGB). This gives great possibilities to determine the bathymetry in order to analyze changes in the marine coastal zone. By comparing the digital bathymetry maps - obtained by the proposed method - over successive periods, one can develop differential maps, which reflect the movements of sea-bottom sediments. This can be used to indicate the most dynamic regions in the examined area. The model is based on the image pixel values and relative depths measured in situ (at selected checkpoints). As a result, the relation between pixel brightness and sea depth (the algorithm) was defined. Using the algorithm, depth calculations for the whole scene were done and a high-resolution bathymetric map was created.
However, the algorithm requires a number of adjustments resulting from, e.g., the phenomenon of vignetting, the distribution of light, and the refraction of light rays at the atmosphere-sea interface. We have developed the algorithm with correction formulas and created a final model in MATLAB. It allows one to obtain a three-dimensional bathymetry visualization for a specific region from a digital color aerial photograph. This model enables determination of the bathymetry of the most dynamic areas in the marine coastal zone, up to 3-4 meters depth, with relatively good accuracy. In addition, the possibility of taking pictures from a drone instead of a plane significantly reduces the cost of the process. In the poster presentation, we will present the model and its results for the area of the Polish west coast. 1. Musgrove R.G., 1969. Photometry for interpretation. Photogrammetric Engineering No. 10. 2. Furmańczyk K., 1975. Możliwości praktycznego zastosowania metody fotogrametrycznej do określania głębokości w strefie brzegowej morza. Gdańsk. 3. Zdanowicz W.G., 1963. Primienienije aerometodow dlia issledowanija moria. Leningrad.
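The core calibration idea, before the correction formulas, can be sketched as a fit of depth against the logarithm of pixel brightness at the in-situ checkpoints (assuming simple exponential attenuation of light with depth, which the full model refines):

```python
# A minimal sketch of the calibration idea (assumed exponential attenuation,
# not the authors' full correction model): light reflected from the seabed
# decays roughly exponentially with depth, so depth is fit linearly against
# ln(pixel value) at the checkpoints and the fit is then applied per pixel.
import math

def fit_depth_model(brightness, depths):
    """Least-squares fit of depth = a * ln(brightness) + b at checkpoints."""
    xs = [math.log(v) for v in brightness]
    n = len(xs)
    mx, my = sum(xs) / n, sum(depths) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, depths)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def depth_at(pixel_value, a, b):
    return a * math.log(pixel_value) + b

# Toy checkpoints: brighter pixels correspond to shallower water.
a, b = fit_depth_model([256, 128, 64], [1.0, 2.0, 3.0])
print(round(depth_at(128, a, b), 2))   # → 2.0
```

Applying `depth_at` to every pixel of the corrected image yields the bathymetric map.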
Image defog algorithm based on open close filter and gradient domain recursive bilateral filter
NASA Astrophysics Data System (ADS)
Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen
2017-11-01
To address the fuzzy details, color distortion, and low brightness of images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, is put forward. The OCRBF algorithm first makes use of a weighted quadtree to obtain a more accurate global atmospheric light value, then applies multiple-structure-element morphological open and close filters to the minimum channel map to obtain a rough scattering map via the dark channel prior, makes use of a variogram to correct the transmittance map, and uses a gradient domain recursive bilateral filter for the smoothing operation; finally, it recovers the image through the image degradation model and adjusts contrast to obtain a bright, clear, fog-free image. A large number of experimental results show that the proposed defog method removes fog well and recovers the color and definition of foggy images containing close-range content, wide perspectives, and bright areas. Compared with other image defog algorithms, it obtains clearer and more natural fog-free images with more visible detail. Moreover, the relationship between the time complexity of the SIDA algorithm and the number of image pixels is linear.
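For reference, the dark channel prior that OCRBF builds on can be sketched as below; a plain local minimum filter stands in for the morphological open-close filtering and the recursive bilateral smoothing:

```python
# Sketch of the dark channel prior that OCRBF builds on (the open/close
# morphological filtering and the recursive bilateral smoothing are replaced
# here by a plain minimum filter for brevity).
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over the color channels and a local patch."""
    mins = img.min(axis=2)                       # min over R, G, B
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, atmosphere, omega=0.95, patch=3):
    """Estimated transmission map t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)

img = np.full((4, 4, 3), 0.8)                    # a uniformly hazy toy image
t = transmission(img, atmosphere=1.0)
```

The fog-free image is then recovered from `t` via the image degradation model J = (I - A) / t + A.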
He, Bo; Zhang, Shujing; Yan, Tianhong; Zhang, Tao; Liang, Yan; Zhang, Hongjin
2011-01-01
Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to address the computational efficiency required for large-scale simultaneous localization and mapping (SLAM) and its related accuracy and consistency. Among these methods, submap-based SLAM is one of the more effective. By combining the strengths of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and the extended information filter (EIF), this paper presents combined SLAM, an efficient submap-based solution to the SLAM problem in large-scale environments. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM can avoid linearization of the robot model during operation and provides robust data association, while EIF-SLAM improves overall computational speed and avoids the tendency of RBPF-SLAM to be over-confident. In order to further improve computational speed in a real-time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed combined SLAM algorithm significantly outperforms existing algorithms in terms of accuracy and consistency, as well as computing efficiency. Finally, the combined SLAM algorithm is experimentally validated in a real environment using the Victoria Park dataset.
NASA Astrophysics Data System (ADS)
Loughman, Robert; Bhartia, Pawan K.; Chen, Zhong; Xu, Philippe; Nyaku, Ernest; Taha, Ghassan
2018-05-01
The theoretical basis of the Ozone Mapping and Profiler Suite (OMPS) Limb Profiler (LP) Version 1 aerosol extinction retrieval algorithm is presented. The algorithm uses an assumed bimodal lognormal aerosol size distribution to retrieve aerosol extinction profiles at 675 nm from OMPS LP radiance measurements. A first-guess aerosol extinction profile is updated by iteration using the Chahine nonlinear relaxation method, based on comparisons between the measured radiance profile at 675 nm and the radiance profile calculated by the Gauss-Seidel limb-scattering (GSLS) radiative transfer model for a spherical-shell atmosphere. This algorithm is discussed in the context of previous limb-scattering aerosol extinction retrieval algorithms, and the most significant error sources are enumerated. The retrieval algorithm is limited primarily by uncertainty about the aerosol phase function. Horizontal variations in aerosol extinction, which violate the spherical-shell atmosphere assumed in the version 1 algorithm, may also limit the quality of the retrieved aerosol extinction profiles significantly.
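The Chahine relaxation step can be sketched as below, with a toy forward model standing in for the GSLS radiative transfer calculation: each level of the extinction profile is scaled by the ratio of measured to modeled radiance until the two profiles agree:

```python
# Sketch of the Chahine nonlinear relaxation iteration used by the retrieval,
# with a toy forward model standing in for the GSLS radiative transfer code.
def chahine_retrieve(forward_model, measured, first_guess, n_iter=50):
    x = list(first_guess)
    for _ in range(n_iter):
        modeled = forward_model(x)
        # Multiplicative update: scale each level by measured/modeled radiance.
        x = [xi * m / c for xi, m, c in zip(x, measured, modeled)]
    return x

# Toy forward model: radiance at each level proportional to extinction there.
def toy_forward(x):
    return [2.0 * xi for xi in x]

retrieved = chahine_retrieve(toy_forward, measured=[4.0, 6.0], first_guess=[1.0, 1.0])
print(retrieved)                       # → [2.0, 3.0]
```

In the real algorithm, `forward_model` is the GSLS model for a spherical-shell atmosphere, and the iteration is applied to the 675 nm radiance profile.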
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
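A minimal sketch of a kernel temporal difference update (the λ = 0 case, with assumed toy parameters) is:

```python
# Minimal sketch of a kernel temporal difference update (λ = 0 case): the
# value function is a kernel expansion, and each observed transition adds a
# center weighted by the learning rate times the TD error.
import math

class KTD:
    def __init__(self, eta=0.5, gamma=0.9, width=1.0):
        self.eta, self.gamma, self.width = eta, gamma, width
        self.centers, self.weights = [], []

    def kernel(self, x, y):
        # Gaussian kernel: strictly positive definite, the property that
        # underlies the convergence guarantee for policy evaluation.
        return math.exp(-((x - y) ** 2) / (2 * self.width ** 2))

    def value(self, x):
        return sum(w * self.kernel(c, x) for c, w in zip(self.centers, self.weights))

    def update(self, x, reward, x_next):
        td_error = reward + self.gamma * self.value(x_next) - self.value(x)
        self.centers.append(x)
        self.weights.append(self.eta * td_error)
        return td_error

ktd = KTD()
for _ in range(200):                   # one rewarding self-transitioning state
    ktd.update(0.0, reward=1.0, x_next=0.0)
print(round(ktd.value(0.0), 1))        # → 10.0
```

The value converges to r / (1 - γ) = 1 / 0.1 = 10, the fixed point of the Bellman equation for this toy chain.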
Experiments with conjugate gradient algorithms for homotopy curve tracking
NASA Technical Reports Server (NTRS)
Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.
1991-01-01
There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
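The unpreconditioned conjugate gradient iteration at the heart of the comparison can be sketched as follows (HOMPACK's variants add preconditioning and exploit the homotopy Jacobian's structure):

```python
# Sketch of the plain conjugate gradient iteration for a symmetric positive
# definite system A x = b; the HOMPACK variants discussed above add
# preconditioning and target the homotopy Jacobian's kernel computation.
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                           # residual b - A x for x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Solve the 2x2 SPD system [[4, 1], [1, 3]] x = [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
x = conjugate_gradient(lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A],
                       [1.0, 2.0])
print([round(xi, 4) for xi in x])      # → [0.0909, 0.6364]
```

For an n-dimensional SPD system, CG converges in at most n iterations in exact arithmetic, which is why it suits the repeated linear solves of curve tracking.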
Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.
2017-01-01
With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separates the GPS trajectory into segments, finds the shortest path for each segment in a scientific manner, and ultimately generates a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
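The longest-common-subsequence similarity score can be sketched as below; the normalization is an assumption for illustration, as the paper's exact scoring details are not given here:

```python
# Hedged sketch of the similarity-score idea: the longest common subsequence
# between the sequence of road links in a trajectory segment and in its
# matched path, normalized to [0, 1] (the paper's exact scoring may differ).
def lcs_length(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def similarity(segment_links, path_links):
    return lcs_length(segment_links, path_links) / max(len(segment_links), len(path_links))

# Toy example: the matched path misses one link of the GPS-derived segment.
print(similarity(["A", "B", "C", "D"], ["A", "B", "D"]))   # → 0.75
```

A score near 1 indicates the candidate shortest path reproduces the segment's link sequence almost exactly.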
Polynomial approximation of Poincare maps for Hamiltonian system
NASA Technical Reports Server (NTRS)
Froeschle, Claude; Petit, Jean-Marc
1992-01-01
Different methods are proposed and tested for transforming a non-linear differential system, and more particularly a Hamiltonian one, into a map without integrating the whole orbit as in the well-known Poincare return map technique. We construct piecewise polynomial maps by coarse-graining the phase-space surface of section into parallelograms and using either only values of the Poincare maps at the vertices or also the gradient information at the nearest neighbors to define a polynomial approximation within each cell. The numerical experiments are in good agreement with both the real symplectic and Poincare maps.
Özarslan, Evren; Koay, Cheng Guan; Shepherd, Timothy M; Komlosh, Michal E; İrfanoğlu, M Okan; Pierpaoli, Carlo; Basser, Peter J
2013-09-01
Diffusion-weighted magnetic resonance (MR) signals reflect information about underlying tissue microstructure and cytoarchitecture. We propose a quantitative, efficient, and robust mathematical and physical framework for representing diffusion-weighted MR imaging (MRI) data obtained in "q-space," and the corresponding "mean apparent propagator (MAP)" describing molecular displacements in "r-space." We also define and map novel quantitative descriptors of diffusion that can be computed robustly using this MAP-MRI framework. We describe an efficient analytical representation of the three-dimensional q-space MR signal in a series expansion of basis functions that accurately describes diffusion in many complex geometries. The lowest-order term in this expansion contains a diffusion tensor that characterizes the Gaussian displacement distribution, equivalent to diffusion tensor MRI (DTI). Inclusion of higher-order terms enables the reconstruction of the true average propagator, whose projection onto the unit "displacement" sphere provides an orientational distribution function (ODF) that contains only the orientational dependence of the diffusion process. The representation characterizes novel features of diffusion anisotropy and the non-Gaussian character of the three-dimensional diffusion process. Other important measures this representation provides include the return-to-the-origin probability (RTOP) and its variants for diffusion in one and two dimensions: the return-to-the-plane probability (RTPP) and the return-to-the-axis probability (RTAP), respectively. These zero-net-displacement probabilities measure the mean compartment (pore) volume and cross-sectional area in distributions of isolated pores, irrespective of the pore shape. MAP-MRI represents a new comprehensive framework to model the three-dimensional q-space signal and transform it into diffusion propagators.
Experiments on an excised marmoset brain specimen demonstrate that MAP-MRI provides several novel, quantifiable parameters that capture previously obscured intrinsic features of nervous tissue microstructure. This should prove helpful for investigating the functional organization of normal and pathologic nervous tissue.
A unified approach to VLSI layout automation and algorithm mapping on processor arrays
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Pattabiraman, S.; Srinivasan, Vinoo N.
1993-01-01
The development of software tools for designing supercomputing systems is highly complex and cost-ineffective. To tackle this, a special-purpose PAcube silicon compiler which integrates different design levels from cell to processor arrays has been proposed. As part of this, we present in this paper a novel methodology which unifies the problems of Layout Automation and Algorithm Mapping.
NASA Technical Reports Server (NTRS)
Saatchi, Sasan; Rignot, Eric; Vanzyl, Jakob
1995-01-01
In recent years, monitoring vegetation biomass over various climate zones has become the primary focus of several studies interested in assessing the role of ecosystem responses to climate change and human activities. Airborne and spaceborne Synthetic Aperture Radar (SAR) systems provide a useful tool to directly estimate biomass due to their sensitivity to the structural and moisture characteristics of vegetation canopies. Even though the sensitivity of SAR data to total aboveground biomass has been successfully demonstrated in many controlled experiments over boreal forests and forest plantations, so far no biomass estimation algorithm has been developed. This is mainly because the SAR data, even at the lowest frequency (P-band), saturate at biomass levels of about 200 tons/ha, and the structure and moisture information in the SAR signal forces the estimation algorithm to be forest-type dependent. In this paper, we discuss the development of a hybrid forest biomass algorithm which uses a SAR-derived land cover map in conjunction with a forest backscatter model and an inversion algorithm to estimate forest canopy water content. It is shown that, unlike direct biomass estimation from SAR data, the estimation of water content does not depend on the seasonal and/or environmental conditions. The total aboveground biomass can then be derived from canopy water content for each type of forest by incorporating other ecological information. Preliminary results from this technique over several boreal forest stands indicate that (1) the forest biomass can be estimated with reasonable accuracy, and (2) the saturation level of the SAR signal can be enhanced by separating the crown and trunk biomass in the inversion algorithm. We have used the JPL AIRSAR data over the BOREAS southern study area to test the algorithm and to generate regional-scale water content and biomass maps. The results are compared with ground data and the sources of errors are discussed.
Several SAR images in synoptic modes are used to generate the parameter maps. The maps are then combined to generate mosaic maps over the BOREAS modeling grid.
Fusion of sensor geometry into additive strain fields measured with sensing skin
NASA Astrophysics Data System (ADS)
Downey, Austin; Sadoughi, Mohammadkazem; Laflamme, Simon; Hu, Chao
2018-07-01
Recently, numerous studies have been conducted on flexible skin-like membranes for the cost-effective monitoring of large-scale structures. The authors have proposed a large-area electronic sensor consisting of a soft elastomeric capacitor (SEC) that transduces a structure’s strain into a measurable change in capacitance. Arranged in a network configuration, SECs deployed onto the surface of a structure could be used to reconstruct strain maps. Several regression methods have recently been developed for reconstructing such maps, but all of these algorithms assumed that each SEC-measured strain is located at the sensor’s geometric center. This assumption may not be realistic, since an SEC measures the average strain over the whole area covered by the sensor. One solution is to reduce the size of each SEC, but this would increase the number of sensors needed to cover a large-scale structure and therefore increase the power and data acquisition requirements. Instead, this study proposes an algorithm that accounts for the sensor’s strain-averaging behavior by adjusting the strain measurements and constructing a full-field strain map using the kriging interpolation method. The proposed algorithm fuses the geometry of an SEC sensor into the strain map reconstruction in order to adaptively adjust the kriging-estimated average strain over the area monitored by the sensor to match the sensor’s signal. Results show that by considering the sensor geometry, in addition to the sensor signal and location, the proposed strain map adjustment algorithm produces more accurate full-field strain maps than the traditional spatial interpolation method that considers only signal and location.
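The kriging step at the core of this approach can be sketched compactly. The following is a minimal ordinary-kriging interpolator in Python; it is not the authors' implementation, and the exponential variogram model and its parameter values are illustrative assumptions:

```python
import numpy as np

def semivariogram(h, sill=1.0, rng=2.0):
    # Exponential variogram model (illustrative sill and range)
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(coords, values, query):
    """Ordinary kriging estimate at a single query point.

    Solves the standard system [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma0; 1]
    and returns the weighted sum of the observed values.
    """
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = semivariogram(d)
    A[n, :n] = A[:n, n] = 1.0   # Lagrange row/column enforcing sum(w) = 1
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = semivariogram(np.linalg.norm(coords - query, axis=1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)
    return float(w[:n] @ values)
```

Adjusting the interpolated map so that its average over each sensor's footprint matches that sensor's signal, as the paper proposes, would iterate on top of an interpolator like this one.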
NASA Astrophysics Data System (ADS)
Drzewiecki, Wojciech
2017-12-01
We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results showed that the Cubist algorithm can be recommended for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models, or better, and may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detecting subtle changes of imperviousness within a pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels when the relevant change threshold was set at one or three percent. Even for a five percent change threshold, most of the algorithms did not ensure that the accuracy of the change map was higher than the accuracy of a random classifier. For a relevant change threshold of ten percent, all approaches performed satisfactorily.
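The per-date accuracy measures named above are standard; as a reference, a minimal numpy sketch (hypothetical helper names, not the study's code):

```python
import numpy as np

def rmse(y, yhat):
    # Root mean squared error
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    # Mean absolute error
    return float(np.mean(np.abs(y - yhat)))

def r2(y, yhat):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```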
Terrain mapping and control of unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Kang, Yeonsik
In this thesis, methods for terrain mapping and control of unmanned aerial vehicles (UAVs) are proposed. First, a robust obstacle detection and tracking algorithm is introduced to eliminate clutter noise uncorrelated with real obstacles. This is an important problem since most types of sensor measurements are vulnerable to noise. In order to eliminate such noise, a Kalman filter-based interacting multiple model (IMM) algorithm is employed to effectively detect obstacles and estimate their positions precisely. Using the outcome of the IMM-based obstacle detection algorithm, a new method of building a probabilistic occupancy grid map is proposed based on Bayes rule. Since the proposed map update law uses the outputs of the IMM-based obstacle detection algorithm, simultaneous tracking of moving targets and mapping of stationary obstacles are possible. This is especially helpful in noisy outdoor environments where different types of obstacles exist. Another feature of the algorithm is its capability to eliminate clutter noise as well as measurement noise. The proposed algorithm is simulated in Matlab using realistic sensor models, and the results show close agreement with the layout of real obstacles. An efficient data structure, the quadtree, is used to process massive geographical information in a convenient manner. The algorithm is evaluated in a realistic simulation environment called RIPTIDE, which the NASA Ames Research Center developed to assess the performance of complicated software for UAVs. Supposing that a UAV is equipped with the above-mentioned obstacle detection and mapping algorithm, the control problem of a small fixed-wing UAV is studied. A Nonlinear Model Predictive Controller (NMPC) is designed as a high-level controller for the fixed-wing UAV using a kinematic model of the UAV. The kinematic model is employed under the assumption that low-level controllers exist on the UAV.
The UAV dynamics are nonlinear with input constraints, which is the main challenge explored in this thesis. The control objective of the NMPC is to track a desired line, and an analysis of the designed NMPC's stability follows to find the conditions that assure stability. The control objective is then extended to tracking adjoined multiple line segments with obstacle avoidance capability. In simulation, the NMPC performs well, with fast convergence and small overshoot. The computation time is not a burden for a fixed-wing UAV controller: a Pentium-level on-board computer provides a reasonable control update rate.
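The Bayes-rule occupancy grid update described in the abstract is commonly implemented in log-odds form; the following single-cell sketch illustrates the idea (the inverse sensor probabilities are illustrative assumptions, not values from the thesis):

```python
import math

def logit(p):
    # Convert a probability to log-odds
    return math.log(p / (1.0 - p))

def update(l, p_occ):
    # Add the log-odds of the inverse sensor model P(occupied | measurement)
    return l + logit(p_occ)

def prob(l):
    # Convert log-odds back to a probability
    return 1.0 / (1.0 + math.exp(-l))
```

Two detections at P(occ|z) = 0.7 push the cell toward occupied; one contrary reading at 0.3 cancels exactly one of them, which is what makes the log-odds form convenient for incremental mapping.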
The admissible portfolio selection problem with transaction costs and an improved PSO algorithm
NASA Astrophysics Data System (ADS)
Chen, Wei; Zhang, Wei-Guo
2010-05-01
In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors on the expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm, because traditional optimization algorithms fail to work efficiently for the proposed problem. Finally, we offer a numerical example to illustrate the effectiveness of the proposed approach and compare the admissible portfolio efficient frontiers under different constraints.
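For readers unfamiliar with PSO, a minimal global-best variant minimizing a generic objective is sketched below; this is the textbook algorithm, not the authors' improved version, and all parameter values are illustrative:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Global-best particle swarm optimization of f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()             # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia / cognitive / social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(np.min(pbest_f))
```

A portfolio application would replace the toy objective with the model's risk-return criterion and add constraint handling, which is where the paper's improvements lie.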
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces several challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping and core allocation. Simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
Is Log Ratio a Good Value for Measuring Return in Stock Investments?
NASA Astrophysics Data System (ADS)
Ultsch, Alfred
Measuring the rate of return is an important issue for the theory and practice of investments in the stock market. A common measure for the rate of return is the logarithm of the ratio of successive prices (LogRatio). In this paper it is shown that LogRatio, as well as the arithmetic return rate (Ratio), has several disadvantages. As an alternative, relative differences (RelDiff) are proposed to measure return. The stability of RelDiff against numerical and rounding errors is much better than that of LogRatio and Ratio. RelDiff values are practically identical to LogRatio and Ratio for small absolute changes. The usage of RelDiff maps returns to a finite range, which is a big advantage for most subsequent analyses. The usefulness of the approach is demonstrated on daily return rates of a large set of actual stocks. It is shown that returns can be modeled with great precision by a very simple mixture of distributions using relative differences.
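A minimal sketch of the comparison, assuming the common definition of relative difference as the change divided by the mean of the two prices (the paper's exact formula may differ); for positive prices this maps returns into the finite interval (-2, 2) while agreeing with the log ratio to first order:

```python
import math

def log_ratio(p_now, p_prev):
    # Logarithmic return: unbounded for large price moves
    return math.log(p_now / p_prev)

def rel_diff(p_now, p_prev):
    # Relative difference: change divided by the mean of the two prices,
    # bounded in (-2, 2) for positive prices
    return (p_now - p_prev) / ((p_now + p_prev) / 2.0)
```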
Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.
Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin
2018-03-01
The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to recover the underlying susceptibility from phase data faithfully. Gradient-echo images of a healthy volunteer were acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, which was computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single-orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structural similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best-scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Bi-objective integer programming for RNA secondary structure prediction with pseudoknots.
Legendre, Audrey; Angel, Eric; Tahi, Fariza
2018-01-15
RNA structure prediction is an important field in bioinformatics, and numerous methods and tools have been proposed. Pseudoknots are specific motifs of RNA secondary structures that are difficult to predict. Almost all existing methods are based on a single model and return one solution, often missing the real structure. An alternative approach is to combine different models and return a (small) set of solutions, maximizing its quality and diversity in order to increase the probability that it contains the real structure. We propose here an original method for predicting RNA secondary structures with pseudoknots, based on integer programming. We developed a generic bi-objective integer programming algorithm that returns optimal and sub-optimal solutions while optimizing two models simultaneously. This algorithm was then applied to the combination of two known models of RNA secondary structure prediction, namely MEA and MFE. The resulting tool, called BiokoP, is compared with the other methods in the literature. The results show that the best solution (the structure with the highest F1-score) is, in most cases, given by BiokoP. Moreover, the results of BiokoP are homogeneous, regardless of the pseudoknot type or the presence or absence of pseudoknots. Indeed, the F1-scores are always higher than 70% for any number of solutions returned. The results obtained by BiokoP show that combining the MEA and MFE models, as well as returning several optimal and several sub-optimal solutions, improves the prediction of secondary structures. One perspective of our work is to combine better mono-criterion models, in particular a model based on the comparative approach with the MEA and MFE models. This will require developing a new multi-objective algorithm that can combine more than two models. BiokoP is available on the EvryRNA platform: https://EvryRNA.ibisc.univ-evry.fr .
NASA Astrophysics Data System (ADS)
Câmara, F.; Oliveira, J.; Hormigo, T.; Araújo, J.; Ribeiro, R.; Falcão, A.; Gomes, M.; Dubois-Matra, O.; Vijendran, S.
2015-06-01
This paper discusses the design and evaluation of data fusion strategies to perform tiered fusion of several heterogeneous sensors and a priori data. The aim is to increase robustness and performance of hazard detection and avoidance systems, while enabling safe planetary and small body landings anytime, anywhere. The focus is on Mars and asteroid landing mission scenarios and three distinct data fusion algorithms are introduced and compared. The first algorithm consists of a hybrid camera-LIDAR hazard detection and avoidance system, the H2DAS, in which data fusion is performed at both sensor-level data (reconstruction of the point cloud obtained with a scanning LIDAR using the navigation motion states and correcting the image for motion compensation using IMU data), feature-level data (concatenation of multiple digital elevation maps, obtained from consecutive LIDAR images, to achieve higher accuracy and resolution maps while enabling relative positioning) as well as decision-level data (fusing hazard maps from multiple sensors onto a single image space, with a single grid orientation and spacing). The second method presented is a hybrid reasoning fusion, the HRF, in which innovative algorithms replace the decision-level functions of the previous method, by combining three different reasoning engines—a fuzzy reasoning engine, a probabilistic reasoning engine and an evidential reasoning engine—to produce safety maps. Finally, the third method presented is called Intelligent Planetary Site Selection, the IPSIS, an innovative multi-criteria, dynamic decision-level data fusion algorithm that takes into account historical information for the selection of landing sites and a piloting function with a non-exhaustive landing site search capability, i.e., capable of finding local optima by searching a reduced set of global maps. All the discussed data fusion strategies and algorithms have been integrated, verified and validated in a closed-loop simulation environment. 
Monte Carlo simulation campaigns were performed for the algorithms performance assessment and benchmarking. The simulations results comprise the landing phases of Mars and Phobos landing mission scenarios.
Network-level accident-mapping: Distance based pattern matching using artificial neural network.
Deka, Lipika; Quddus, Mohammed
2014-04-01
The objective of an accident-mapping algorithm is to snap traffic accidents onto the correct road segments. Assigning accidents to the correct segments makes it possible to robustly carry out key analyses in accident research, including the identification of accident hot-spots, network-level risk mapping and segment-level accident risk modelling. Existing mapping algorithms have some severe limitations: (i) they are not easily 'transferable', as the algorithms are specific to given accident datasets; (ii) they do not perform well in all road-network environments, such as areas of dense road network; and (iii) they do not perform well in addressing the inaccuracies inherent in the recorded accident data. The purpose of this paper is to develop a new accident-mapping algorithm based on the common variables observed in most accident databases (e.g. road name and type, direction of vehicle movement before the accident and recorded accident location). The challenges here are to: (i) develop a method that takes into account uncertainties inherent to the recorded traffic accident data and the underlying digital road network data, (ii) accurately determine the type and proportion of inaccuracies, and (iii) develop a robust algorithm that can be adapted for any accident dataset and road network of varying complexity. In order to overcome these challenges, a distance-based pattern-matching approach is used to identify the correct road segment. This is based on vectors containing feature values that are common to the accident data and the network data. Since each feature does not contribute equally towards the identification of the correct road segment, an ANN approach using the single-layer perceptron is used to "learn" the relative importance of each feature in the distance calculation and hence the correct link identification.
The performance of the developed algorithm was evaluated based on a reference accident dataset from the UK, confirming that its accuracy is much better than that of other methods. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
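The distance-based matching step can be sketched as follows; the feature weights would be learned by the single-layer perceptron, but here they are fixed illustrative values, and the helper names are hypothetical rather than the authors' code:

```python
import numpy as np

def weighted_distance(acc, seg, w):
    # w holds the (learned) relative importance of each shared feature
    return float(np.sqrt(np.sum(w * (acc - seg) ** 2)))

def match_segment(acc, segments, w):
    # Snap the accident to the candidate segment with the smallest
    # weighted feature distance
    d = [weighted_distance(acc, s, w) for s in segments]
    return int(np.argmin(d))
```

As the toy test shows, changing the feature weights changes which segment is matched, which is why learning them matters.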
Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development
NASA Astrophysics Data System (ADS)
Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.
2012-04-01
The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (the GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, is being developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, the HDF5 interface, and minimum error handling. Pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm is being developed by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "Baseline code", was completed in January 2012. The Baseline code includes the main module and eight basic sub-modules (Preparation, Vertical Profile, Classification, SRT, DSD, Solver, Input, and Output modules). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and Dual-frequency Precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height.
It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. Pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007, and the near-real-time version operating at JAXA since 2007. The "Baseline code" used the current operational GSMaP code (V5.222), and its development was completed in January 2012. Pre-launch code will be developed by autumn 2012, including an update of the databases for rain type classification and rain/no-rain classification, and the introduction of rain-gauge correction.
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
NASA Astrophysics Data System (ADS)
Iswanto; Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
The paper aims to present a path-planning algorithm that allows multiple quadrotors to move towards a goal quickly while avoiding obstacles in an obstacle-filled area. There are several problems in path planning, including how to reach the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper presents a fuzzy logic algorithm combined with a cell decomposition algorithm. Fuzzy logic is an artificial intelligence method that can be applied to robot path planning and is able to handle static and dynamic obstacles. Cell decomposition is a graph-theoretic algorithm used to build a robot path map. Using the two algorithms, the robot is able to reach the goal position and avoid obstacles, but it takes considerable time because they are not able to find the shortest path. Therefore, this paper describes a modification of the algorithms that adds a potential field algorithm to assign weight values on the map for each quadrotor under decentralized control, so that each quadrotor is able to move to the goal position quickly by finding the shortest path. The simulations conducted show that the multi-quadrotor system can avoid various obstacles and find the shortest path using the proposed algorithms.
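The potential-field weighting added to the cell map can be illustrated with the classic attractive/repulsive formulation; the gains and influence radius below are illustrative assumptions, not the paper's values:

```python
import math

def attractive(d_goal, k=1.0):
    # Quadratic attraction toward the goal
    return 0.5 * k * d_goal ** 2

def repulsive(d_obs, d0=2.0, eta=1.0):
    # Repulsion acts only within the obstacle influence radius d0
    if d_obs >= d0:
        return 0.0
    return 0.5 * eta * (1.0 / d_obs - 1.0 / d0) ** 2

def potential(d_goal, d_obs):
    # Total field value used to weight a cell of the map
    return attractive(d_goal) + repulsive(d_obs)
```

Cells close to obstacles receive a high weight and cells near the goal a low one, so a shortest-path search over the weighted map steers the quadrotor around obstacles.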
NASA Technical Reports Server (NTRS)
Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew
2017-01-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark object (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high-performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite, nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, a 6 percent improvement in unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
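Fully constrained least squares unmixing solves min ||Ma - y||² subject to a ≥ 0 and sum(a) = 1, i.e. a least-squares problem over the unit simplex. A minimal projected-gradient sketch in numpy follows; this is not the authors' NEX-scale implementation, and the endmember matrix in the usage test is synthetic:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {a : a >= 0, sum(a) = 1}
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / (np.arange(len(v)) + 1))[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fcls(M, y, iters=500):
    """FCLS unmixing by projected gradient descent over the unit simplex.

    M: (bands x endmembers) spectral library, y: observed pixel spectrum.
    Returns the abundance vector a.
    """
    a = np.full(M.shape[1], 1.0 / M.shape[1])       # uniform start
    step = 1.0 / np.linalg.norm(M.T @ M, 2)          # 1 / Lipschitz constant
    for _ in range(iters):
        a = project_simplex(a - step * (M.T @ (M @ a - y)))
    return a
```

Each pixel is independent, so at scale this solve is trivially parallel, which is what makes an HPC architecture like NEX effective for it.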
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2017-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark object (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high-performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS, the National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite, nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, a 6% improvement in unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
Karimi Moridani, Mohammad; Setarehdan, Seyed Kamaledin; Motie Nasrabadi, Ali; Hajinasrollah, Esmaeil
2016-01-01
Intensive care unit (ICU) patients are at risk of in-ICU morbidity and mortality, making specific systems for identifying at-risk patients a necessity for improving clinical care. This study presents a new method for predicting in-hospital mortality using heart rate variability (HRV) collected during a patient's ICU stay. In this paper, an HRV time series processing based method is proposed for mortality prediction of ICU cardiovascular patients. HRV signals were obtained by measuring R-R time intervals. A novel representation, named the return map, is then developed that reveals useful information from the HRV time series. This study also proposes several features that can be extracted from the return map, including the angle between two vectors, the area of triangles formed by successive points, the shortest distance to the 45° line, and their various combinations. Finally, a thresholding technique is proposed to extract the risk period and to predict mortality. The data used to evaluate the proposed algorithm were obtained from 80 cardiovascular ICU patients, from the first 48 h of the first ICU stay of 40 males and 40 females. This study showed that the angle feature has on average a sensitivity of 87.5% (with 12 false alarms), the area feature has on average a sensitivity of 89.58% (with 10 false alarms), the shortest distance feature has on average a sensitivity of 85.42% (with 14 false alarms) and, finally, the combined feature has on average a sensitivity of 92.71% (with seven false alarms). The results showed that the last half hour before the patient's death is very informative for diagnosing the patient's condition and saving his/her life. These results confirm that it is possible to predict mortality based on the features introduced in this paper, relying on the variations of the HRV dynamic characteristics.
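The return map is the set of points (RR_n, RR_n+1), and the described features can be computed directly from it. The following is a sketch of plausible definitions, since the paper's exact formulas are not reproduced here:

```python
import numpy as np

def return_map(rr):
    # Pairs of successive R-R intervals: rows are (RR_n, RR_n+1)
    return np.column_stack([rr[:-1], rr[1:]])

def dist_to_identity(pts):
    # Perpendicular distance of each point to the 45-degree line y = x
    return np.abs(pts[:, 0] - pts[:, 1]) / np.sqrt(2.0)

def triangle_areas(pts):
    # Area of the triangle formed by each run of three successive points
    a, b, c = pts[:-2], pts[1:-1], pts[2:]
    u, v = b - a, c - a
    return 0.5 * np.abs(u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0])

def angles(pts):
    # Angle between successive displacement vectors on the return map
    d = np.diff(pts, axis=0)
    u, v = d[:-1], d[1:]
    cosang = np.sum(u * v, axis=1) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return np.arccos(np.clip(cosang, -1.0, 1.0))
```

A perfectly regular rhythm collapses onto the identity line (all distances zero), while alternating intervals produce large distances and reversal angles, which is the kind of dynamic variation the thresholding step exploits.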
NASA Astrophysics Data System (ADS)
Shi, Y.; Gorban, A. N.; Yang, T. Y.
2014-03-01
This case study tests the possibility of predicting the 'success' (or 'winner') components of four stock and shares market indices over a time period of three years, from 02-Jul-2009 to 29-Jun-2012. We compare their performance in two time frames: an initial frame of three months at the beginning (02/06/2009-30/09/2009) and a final three-month frame (02/04/2012-29/06/2012). To label the components, the average price ratio between the two time frames is computed in descending order. The average price ratio is defined as the ratio between the mean prices of the beginning and final time periods. The 'winner' components are the top one third of the total components in this order, meaning the mean price of the final time period is relatively higher than that of the beginning time period. The 'loser' components are the last one third of the total components in the same order, as they have higher mean prices in the beginning time period. We analyse whether there is any information about the winner-loser separation in the initial fragments of the daily closing-price log-return time series. Leave-One-Out Cross-Validation with the k-NN algorithm is applied to the daily log-returns of the components, using distance and proximity measures in the experiment. The error analysis shows that for the HANGSENG and DAX indices there are clear signs of a possibility to evaluate the probability of long-term success. The correlation distance matrix histograms and 2-D/3-D elastic maps generated with ViDaExpert show that the 'winner' components are closer to each other and that 'winner'/'loser' components are separable on elastic maps for the HANGSENG and DAX indices, while for the other indices there is no sign of separation.
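The Leave-One-Out Cross-Validation with 1-NN under a correlation-based distance can be sketched as follows; the distance shown is one common choice, while the study compares several:

```python
import numpy as np

def corr_dist(x, y):
    # Correlation distance: small when the two return series co-move
    r = np.corrcoef(x, y)[0, 1]
    return np.sqrt(max(0.0, 2.0 * (1.0 - r)))

def loocv_1nn(X, labels):
    # Leave-one-out: classify each series by its nearest other series
    n = len(X)
    correct = 0
    for i in range(n):
        d = [corr_dist(X[i], X[j]) if j != i else np.inf for j in range(n)]
        correct += labels[int(np.argmin(d))] == labels[i]
    return correct / n
```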
Metadata mapping and reuse in caBIG™
Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis
2009-01-01
Background This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG™). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG™ framework or other frameworks that use metadata repositories. Results The Dice (di-grams) and Dynamic algorithms are compared, and both algorithms have similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG™ framework and potentially any framework that uses a metadata repository. Conclusion This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG™. This effort contributes to facilitating the development of interoperable systems within caBIG™ as well as other metadata frameworks. Such efforts are critical to address the need for systems that can handle the enormous amounts of diverse data generated by new biomedical methodologies. PMID:19208192
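The Dice (di-grams) lexical match can be sketched in a few lines; this is the standard bigram Dice coefficient, assumed here to be close to what the paper used:

```python
def bigrams(s):
    # Set of overlapping two-character substrings, case-folded
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    # Dice coefficient: 2 * |shared bigrams| / (total bigrams in each)
    A, B = bigrams(a), bigrams(b)
    if not A and not B:
        return 1.0
    return 2 * len(A & B) / (len(A) + len(B))
```

Scoring a UML class-attribute name against every CDE name and keeping the highest-scoring pairs gives the kind of candidate match list the paper evaluates.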
NASA Astrophysics Data System (ADS)
Farda, N. M.
2017-12-01
Coastal wetlands provide ecosystem services essential to people and the environment. Changes in coastal wetlands, especially in land use, are important to monitor by utilizing multi-temporal imagery. The Google Earth Engine (GEE) provides many machine learning algorithms (10 algorithms) that are very useful for extracting land use from imagery. The research objective is to explore machine learning in Google Earth Engine and its accuracy for multi-temporal land use mapping of a coastal wetland area. Landsat 3 MSS (1978), Landsat 5 TM (1991), Landsat 7 ETM+ (2001), and Landsat 8 OLI (2014) images located in the Segara Anakan lagoon are selected to represent multi-temporal images. The inputs for machine learning are visible and near-infrared bands, PCA bands, inverse PCA bands, bare soil index, vegetation index, wetness index, elevation from ASTER GDEM, and GLCM (Haralick) texture, along with polygon samples at 140 locations. Ten machine learning algorithms are applied to extract coastal wetland land use from the Landsat imagery: Fast Naive Bayes, CART (Classification and Regression Tree), Random Forests, GMO Max Entropy, Perceptron (Multi Class Perceptron), Winnow, Voting SVM, Margin SVM, Pegasos (Primal Estimated sub-GrAdient SOlver for SVM), and IKPamir (Intersection Kernel Passive Aggressive Method for Information Retrieval) SVM. Machine learning in Google Earth Engine is very helpful for multi-temporal land use mapping; the highest accuracy for land use mapping of the coastal wetland is achieved by CART with 96.98% Overall Accuracy using K-Fold Cross Validation (K = 10). GEE is particularly useful for multi-temporal land use mapping with ready-to-use imagery and classification algorithms, and is also very challenging for other applications.
LADOTD 24-Hour rainfall frequency maps and I-D-F curves : summary report.
DOT National Transportation Integrated Search
1991-08-01
Maximum annual 24-hour rainfall maps and Intensity-Duration-Frequency (I-D-F) curves for return periods of 2, 5, 10, 25, 50, and 100 years were developed using hourly precipitation data. Data from 92 rain gauges for the period 1948 to 1987 were compiled and...
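Return-period estimates of this kind are typically obtained by fitting an extreme-value distribution to the annual maxima. A minimal sketch, assuming a method-of-moments Gumbel fit (an illustrative technique, not necessarily the LADOTD procedure; the demo parameters are arbitrary):

```python
import math

def gumbel_return_level(annual_maxima, T):
    """Return-level estimate for return period T (years) from annual
    24-hour rainfall maxima, via a method-of-moments Gumbel fit."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi      # Gumbel scale parameter
    mu = mean - 0.5772 * beta                # location (Euler-Mascheroni const.)
    # quantile with annual exceedance probability 1/T
    return mu - beta * math.log(-math.log(1 - 1 / T))

# demo: deterministic Gumbel(mu=100, beta=20) sample via the inverse CDF
n = 2000
sample = [100 - 20 * math.log(-math.log((i + 0.5) / n)) for i in range(n)]
rl100 = gumbel_return_level(sample, T=100)
```

The same fit, applied gauge by gauge and duration by duration, yields the depth values that are contoured into frequency maps and I-D-F curves.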
Pose and motion recovery from feature correspondences and a digital terrain map.
Lerner, Ronen; Rivlin, Ehud; Rotstein, Héctor P
2006-09-01
A novel algorithm for pose and motion estimation using corresponding features and a Digital Terrain Map is proposed. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables the elimination of the ambiguity present in vision-based algorithms for motion recovery. As a consequence, the absolute position and orientation of a camera can be recovered with respect to the external reference frame. In order to do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. Explicit reconstruction of the 3D world is not required. When considering a number of feature points, the resulting constraints can be solved using nonlinear optimization in terms of position, orientation, and motion. Such a procedure requires an initial guess of these parameters, which can be obtained from dead-reckoning or any other source. The feasibility of the algorithm is established through extensive experimentation. Performance is compared with a state-of-the-art alternative algorithm, which intermediately reconstructs the 3D structure and then registers it to the DTM. A clear advantage for the novel algorithm is demonstrated in a variety of scenarios.
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. First, we utilize the eigenface algorithm to convert a sketch into a synthesized sketch face image. Next, to address the low-level vision problems in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual quality. Specifically, we use a lightweight super-resolution structure that learns a residual mapping instead of directly mapping feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (Linear Discriminant Analysis) algorithm to perform face sketch recognition on the synthesized face images both before and after super resolution. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and that of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super resolution not only better describes image details such as hair, nose, and mouth, but also effectively improves recognition accuracy.
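The LDA stage can be sketched with a two-class Fisher discriminant in plain NumPy (an illustrative reduction; the paper's experiments use multi-class LDA over the CUFS gallery, and the demo data below is synthetic):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher discriminant direction w = Sw^-1 (m1 - m0),
    where Sw is the pooled within-class scatter matrix."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    # small ridge term keeps Sw invertible for high-dimensional features
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

def lda_classify(x, w, X0, X1):
    """Assign x to class 1 if its projection lies past the midpoint
    of the projected class means."""
    t = 0.5 * (X0.mean(axis=0) + X1.mean(axis=0)) @ w
    return int(x @ w > t)
```

In the multi-class setting the same scatter-matrix construction generalizes to a projection onto several discriminant directions, with nearest-mean assignment in the projected space.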
NASA Technical Reports Server (NTRS)
Chopping, Mark; North, Malcolm; Chen, Jiquan; Schaaf, Crystal B.; Blair, J. Bryan; Martonchik, John V.; Bull, Michael A.
2012-01-01
This study addresses the retrieval of spatially contiguous canopy cover and height estimates in southwestern US forests via inversion of a geometric-optical (GO) model against surface bidirectional reflectance factor (BRF) estimates from the Multi-angle Imaging SpectroRadiometer (MISR). Model inversion can provide such maps if good estimates of the background bidirectional reflectance distribution function (BRDF) are available. The study area is in the Sierra National Forest in the Sierra Nevada of California. Tree number density, mean crown radius, and fractional cover reference estimates were obtained via analysis of QuickBird 0.6 m spatial resolution panchromatic imagery using the CANopy Analysis with Panchromatic Imagery (CANAPI) algorithm, while RH50, RH75, and RH100 (50%, 75%, and 100% energy return) height data were obtained from the NASA Laser Vegetation Imaging Sensor (LVIS), a full-waveform light detection and ranging (lidar) instrument. These canopy parameters were used to drive a modified version of the simple GO model (SGM), accurately reproducing patterns of MISR 672 nm band surface reflectance (mean RMSE 0.011, mean R2 0.82, N = 1048). Cover and height maps were obtained through model inversion against MISR 672 nm reflectance estimates on a 250 m grid. The free parameters were tree number density and mean crown radius. RMSE values with respect to reference data for the cover and height retrievals were 0.05 and 6.65 m, respectively, with R2 values of 0.54 and 0.49. MISR can thus provide maps of forest cover and height in areas of topographic variation, although refinements are required to improve retrieval precision.
Chow, James C.L.; Grigorov, Grigor N.; Yazdani, Nuri
2006-01-01
A custom‐made computer program, SWIMRT, to construct “multileaf collimator (MLC) machine” file for intensity‐modulated radiotherapy (IMRT) fluence maps was developed using MATLAB® and the sliding window algorithm. The user can either import a fluence map with a graphical file format created by an external treatment‐planning system such as Pinnacle3 or create his or her own fluence map using the matrix editor in the program. Through comprehensive calibrations of the dose and the dimension of the imported fluence field, the user can use associated image‐processing tools such as field resizing and edge trimming to modify the imported map. When the processed fluence map is suitable, a “MLC machine” file is generated for our Varian 21 EX linear accelerator with a 120‐leaf Millennium MLC. This machine file is transferred to the MLC console of the LINAC to control the continuous motions of the leaves during beam irradiation. An IMRT field is then irradiated with the 2D intensity profiles, and the irradiated profiles are compared to the imported or modified fluence map. This program was verified and tested using film dosimetry to address the following uncertainties: (1) the mechanical limitation due to the leaf width and maximum traveling speed, and (2) the dosimetric limitation due to the leaf leakage/transmission and penumbra effect. Because the fluence map can be edited, resized, and processed according to the requirement of a study, SWIMRT is essential in studying and investigating the IMRT technique using the sliding window algorithm. Using this program, future work on the algorithm may include redistributing the time space between segmental fields to enhance the fluence resolution, and readjusting the timing of each leaf during delivery to avoid small fields. Possible clinical utilities and examples for SWIMRT are given in this paper. PACS numbers: 87.53.Kn, 87.53.St, 87.53.Uv PMID:17533330
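The core of sliding-window delivery can be sketched as the standard unidirectional sweep decomposition of a fluence profile into two leaf trajectories (an illustrative simplification; SWIMRT additionally handles leaf-speed limits, leakage/transmission, and penumbra):

```python
def sliding_window(profile):
    """Decompose a 1-D fluence profile into cumulative monitor-unit (MU)
    values at which the two leaf edges of an MLC pair cross each position
    during a unidirectional sweep.  At every position i,
    profile[i] = lead[i] - trail[i]."""
    lead, trail = [], []
    L = T = prev = 0.0
    for v in profile:
        if v >= prev:
            L += v - prev      # rising fluence: opening edge dwells longer
        else:
            T += prev - v      # falling fluence: closing edge catches up
        lead.append(L)
        trail.append(T)
        prev = v
    # if the profile returns to zero, total beam-on MU equals lead[-1]
    return lead, trail
```

A point stays exposed between the MU at which one leaf edge uncovers it and the MU at which the other covers it again, so the difference of the two trajectories reproduces the prescribed fluence.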
Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica
2017-06-29
The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. 
The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store operations. For this reason, the island model is more suitable for PGAs than the global and grid models, also in terms of cost when executed on a commercial cloud provider.
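The island model's structure (independent sub-populations with periodic migration) can be sketched sequentially; in the paper each island would run as a MapReduce task, but the genetic operators are the same. This is a minimal illustrative GA, not the authors' implementation, and all parameters below are arbitrary:

```python
import random

def island_ga(fitness, n_bits=20, islands=4, pop=20, gens=60,
              migrate_every=10, seed=1):
    """Minimal island-model GA: independent sub-populations that
    periodically exchange their best individual over a ring topology."""
    rng = random.Random(seed)
    def newind():
        return [rng.randint(0, 1) for _ in range(n_bits)]
    pops = [[newind() for _ in range(pop)] for _ in range(islands)]
    for g in range(gens):
        for isl in range(islands):
            P = sorted(pops[isl], key=fitness, reverse=True)
            elite, children = P[:2], []
            while len(children) < pop - 2:
                a, b = rng.sample(P[:10], 2)       # truncation selection
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]          # one-point crossover
                if rng.random() < 0.1:             # occasional bit-flip mutation
                    i = rng.randrange(n_bits)
                    child[i] ^= 1
                children.append(child)
            pops[isl] = elite + children
        if (g + 1) % migrate_every == 0:           # ring migration of the best
            bests = [max(P, key=fitness) for P in pops]
            for isl in range(islands):
                pops[isl][-1] = bests[(isl - 1) % islands][:]
    return max((max(P, key=fitness) for P in pops), key=fitness)
```

Because migration happens only every few generations, the islands exchange little data, which is exactly the property that keeps HDFS traffic low in the MapReduce setting.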
Hybrid algorithms for fuzzy reverse supply chain network design.
Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057
ReactionMap: an efficient atom-mapping algorithm for chemical reactions.
Fooshee, David; Andronico, Alessio; Baldi, Pierre
2013-11-25
Large databases of chemical reactions provide new data-mining opportunities and challenges. Key challenges result from the imperfect quality of the data and the fact that many of these reactions are not properly balanced or atom-mapped. Here, we describe ReactionMap, an efficient atom-mapping algorithm. Our approach uses a combination of maximum common chemical subgraph search and minimization of an assignment cost function derived empirically from training data. We use a set of over 259,000 balanced atom-mapped reactions from the SPRESI commercial database to train the system, and we validate it on random sets of 1000 and 17,996 reactions sampled from this pool. These large test sets represent a broad range of chemical reaction types, and ReactionMap correctly maps about 99% of the atoms and about 96% of the reactions, with a mean time per mapping of 2 s. Most correctly mapped reactions are mapped with high confidence. Mapping accuracy compares favorably with ChemAxon's AutoMapper, versions 5 and 6.1, and the DREAM Web tool. These approaches correctly map 60.7%, 86.5%, and 90.3% of the reactions, respectively, on the same data set. A ReactionMap server is available on the ChemDB Web portal at http://cdb.ics.uci.edu.
Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.
Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas
2008-01-01
In this paper, we propose a generic framework for intersubject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points as opposed to spatial registration that solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we perform simultaneously pairwise registrations of corresponding time-points with the constraint to map the same physical points over time. We show this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the inter-subject non-linear registration of 4D cardiac CT sequences.
NASA Astrophysics Data System (ADS)
She, Yuchen; Li, Shuang
2018-01-01
A planning algorithm that calculates a satellite's optimal slew trajectory under a given keep-out constraint is proposed. An energy-optimal formulation is developed for the Space-based multiband astronomical Variable Objects Monitor Mission Analysis and Planning (MAP) system. The innovation of the proposed planning algorithm is that the satellite's structural and control limitations are not treated as optimization constraints but are instead formulated into the cost function. This modification relieves the burden on the optimizer and increases optimization efficiency, which is the major challenge in designing the MAP system. A mathematical analysis proves that there is a proportional mapping between the formulation and the satellite controller output. Simulations of different scenarios demonstrate the efficiency of the developed algorithm.
Analysis of a new phase and height algorithm in phase measurement profilometry
NASA Astrophysics Data System (ADS)
Bian, Xintian; Zuo, Fen; Cheng, Ju
2018-04-01
Traditional phase measurement profilometry adopts divergent illumination to obtain the height distribution of a measured object accurately. However, the mapping relation between reference plane coordinates and phase distribution must be calculated before measurement, and the data stored in a computer as a lookup table for later use. This study improves the distribution of the projected fringes and derives the phase-height mapping algorithm for the case in which the pupils of the projection and imaging systems are at unequal heights and the projection and imaging axes lie in different planes. With this algorithm, calculating the mapping relation between reference plane coordinates and phase distribution prior to measurement is unnecessary. Thus, the measurement process is simplified, and an experimental system is easier to construct. Computer simulation and experimental results confirm the effectiveness of the method.
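Before any phase-to-height mapping is applied, the phase distribution itself is commonly extracted by phase shifting. A minimal sketch of standard four-step phase extraction and unwrapping (a generic technique, not the paper's specific algorithm; the ramp and fringe parameters are arbitrary):

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four fringe images with pi/2 phase shifts:
    I_k = A + B*cos(phi + k*pi/2)."""
    return np.arctan2(I3 - I1, I0 - I2)

# demo: recover a known phase ramp from synthetic fringe intensities
phi = np.linspace(0.0, 6 * np.pi, 500)
A, B = 2.0, 1.0
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
wrapped = four_step_phase(*frames)
recovered = np.unwrap(wrapped)        # remove the 2*pi discontinuities
```

The unwrapped phase is then fed into the phase-height relation; with the derived mapping, no reference-plane lookup table is needed.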
NASA Astrophysics Data System (ADS)
Chacón, L.; Chen, G.; Barnes, D. C.
2013-01-01
We describe the extension of the recent charge- and energy-conserving one-dimensional electrostatic particle-in-cell algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036] to mapped (body-fitted) computational meshes. The approach maintains exact charge and energy conservation properties. Key to the algorithm is a hybrid push, where particle positions are updated in logical space, while velocities are updated in physical space. The effectiveness of the approach is demonstrated with a challenging numerical test case, the ion acoustic shock wave. The generalization of the approach to multiple dimensions is outlined.
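The hybrid push can be illustrated on a toy one-dimensional mapped mesh. The sketch below is illustrative only: the mapping x = xi^2, the field E, and the explicit time stepper are assumptions, and the real algorithm's implicit, charge- and energy-conserving machinery is omitted.

```python
def X(xi):
    return xi ** 2            # assumed logical-to-physical mapping

def J(xi):
    return 2.0 * xi           # Jacobian dx/dxi of the mapping

def hybrid_push(xi, v, E, q_over_m, dt, steps):
    """Explicit toy version of the hybrid push: velocity is updated in
    physical space, position is advanced in logical space via
    dxi/dt = v / J(xi)."""
    for _ in range(steps):
        v = v + q_over_m * E(X(xi)) * dt   # physical-space velocity update
        xi = xi + dt * v / J(xi)           # logical-space position update
    return xi, v

# demo: force-free particle on the nonuniform mesh
xi, v = hybrid_push(xi=0.5, v=0.1, E=lambda x: 0.0,
                    q_over_m=1.0, dt=1e-3, steps=1000)
```

Dividing the velocity by the local Jacobian converts a physical displacement into a logical one, so a force-free particle still moves at constant physical speed even though the mesh spacing varies.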
A smoothed two- and three-dimensional interface reconstruction method
Mosso, Stewart; Garasi, Christopher; Drake, Richard
2008-04-22
The Patterned Interface Reconstruction algorithm reduces the discontinuity between material interfaces in neighboring computational elements. This smoothing improves the accuracy of the reconstruction for smooth bodies. The method can be used in two- and three-dimensional Cartesian and unstructured meshes. Planar interfaces will be returned for planar volume fraction distributions. Finally, the algorithm is second-order accurate for smooth volume fraction distributions.
NASA Technical Reports Server (NTRS)
Keihm, S. J.
1983-01-01
When high resolution measurements of the phase variation of the lunar disk center brightness temperature revealed that in situ regolith electrical losses were larger than those measured on returned samples by a factor of 1.5 to 2.0 at centimeter wavelengths, the need for a refinement of the regolith model to include realistic treatment of scattering effects was identified. Two distinct scattering regimes are considered: vertial variations in dielectric constant and volume scattering due to subsurface rock fragments. Models of lunar regolith energy transport processes are now at the state for which a maximum scientific return could be realized from a lunar orbiter microwave mapping experiment. A detailed analysis, including the effects of scattering produced a set of nominal brightness temperature spectra for lunar equatorial regions, which can be used for mapping as a calibration reference for mapping variations in mineralogy and heat flow.
Mapping river bathymetry with a small footprint green LiDAR: Applications and challenges
Kinzel, Paul J.; Legleiter, Carl; Nelson, Jonathan M.
2013-01-01
...that environmental conditions and postprocessing algorithms can influence the accuracy and utility of these surveys and must be given consideration. These factors can lead to mapping errors that can have a direct bearing on derivative analyses such as hydraulic modeling and habitat assessment. We discuss the water and substrate characteristics of the sites, compare the conventional and remotely sensed river-bed topographies, and investigate the laser waveforms reflected from submerged targets to provide an evaluation as to the suitability and accuracy of the EAARL system and associated processing algorithms for riverine mapping applications.
Enhancing scattering images for orientation recovery with diffusion map
Winter, Martin; Saalmann, Ulf; Rost, Jan M.
2016-02-12
We explore the possibility for orientation recovery in single-molecule coherent diffractive imaging with diffusion map. This algorithm approximates the Laplace-Beltrami operator, which we diagonalize with a metric that corresponds to the mapping of Euler angles onto scattering images. While suitable for images of objects with specific properties, we show why this approach fails for realistic molecules. Here, we introduce a modification of the form factor in the scattering images which facilitates the orientation recovery and should be suitable for all recovery algorithms based on the distance of individual images. (C) 2016 Optical Society of America
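The diffusion-map construction itself (Gaussian kernel on pairwise distances, Markov normalization, spectral embedding) can be sketched in a few lines; in the demo, points on a circle stand in for the manifold of scattering images, and the kernel bandwidth is arbitrary:

```python
import numpy as np

def diffusion_map(points, eps, n_coords=2):
    """Minimal diffusion-map embedding: Gaussian affinities are
    row-normalized to a Markov matrix whose leading nontrivial
    eigenvectors give the diffusion coordinates."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)      # Markov transition matrix
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    sel = order[1:1 + n_coords]               # skip the constant eigenvector
    return V.real[:, sel] * w.real[sel]

# demo: samples on a circle, whose intrinsic geometry the embedding recovers
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]
emb = diffusion_map(pts, eps=0.5)
```

For orientation recovery the distance matrix would be computed between scattering images rather than between 2-D points, which is why a distance-distorting form factor matters for every method of this family.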
A novel algorithm for thermal image encryption.
Hussain, Iqtadar; Anees, Amir; Algarni, Abdulmohsen
2018-04-16
Thermal images play a vital role at nuclear plants, power stations, forensic labs, biological research facilities, and petroleum extraction sites, so their safety is very important. Image data has unique features such as intensity, contrast, homogeneity, entropy, and correlation among pixels, which make image encryption somewhat trickier than other encryption tasks; conventional image encryption schemes normally find these features hard to handle. Cryptographers have therefore turned to attractive properties of chaotic maps, such as randomness and sensitivity, to build novel cryptosystems, and recently proposed image encryption techniques increasingly depend on the application of chaotic maps. This paper proposes an image encryption algorithm based on the Chebyshev chaotic map and substitution boxes built from the S8 symmetric group of permutations. First, the parameters of the chaotic Chebyshev map are chosen as a secret key to confuse the plain image. Then, the image is encrypted using the substitution boxes and the Chebyshev map. By this process, we obtain a ciphertext image that is thoroughly confused and diffused. The outcomes of standard experiments, key sensitivity tests, and statistical analysis confirm that the proposed algorithm offers a safe and efficient approach for real-time image encryption.
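The chaotic-map ingredient can be illustrated with a toy Chebyshev-map stream cipher. This sketch is illustrative only and not secure: the paper's S8 substitution-box stage and full scheme are omitted, and the key values (x0, k) below are arbitrary.

```python
import math

def chebyshev_keystream(x0, k, n):
    """Keystream bytes from the Chebyshev map x_{n+1} = cos(k*arccos(x_n)),
    x in [-1, 1].  The secret key is the pair (x0, k)."""
    x, out = x0, []
    for _ in range(n):
        x = math.cos(k * math.acos(x))      # one chaotic iteration
        out.append(int((x + 1) * 127.5) & 0xFF)   # quantize to a byte
    return out

def xor_cipher(data, x0=0.3, k=4.0):
    """XOR the data with the chaotic keystream; applying the function
    twice with the same key recovers the plaintext."""
    ks = chebyshev_keystream(x0, k, len(data))
    return bytes(b ^ s for b, s in zip(data, ks))
```

The map's sensitivity to x0 and k is what gives the scheme its key sensitivity: a tiny change in either parameter produces an entirely different keystream after a few iterations.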
Fast flow-based algorithm for creating density-equalizing map projections
Gastner, Michael T.; Seguy, Vivien; More, Pratyush
2018-01-01
Cartograms are maps that rescale geographic regions (e.g., countries, districts) such that their areas are proportional to quantitative demographic data (e.g., population size, gross domestic product). Unlike conventional bar or pie charts, cartograms can represent correctly which regions share common borders, resulting in insightful visualizations that can be the basis for further spatial statistical analysis. Computer programs can assist data scientists in preparing cartograms, but developing an algorithm that can quickly transform every coordinate on the map (including points that are not exactly on a border) while generating recognizable images has remained a challenge. Methods that translate the cartographic deformations into physics-inspired equations of motion have become popular, but solving these equations with sufficient accuracy can still take several minutes on current hardware. Here we introduce a flow-based algorithm whose equations of motion are numerically easier to solve compared with previous methods. The equations allow straightforward parallelization so that the calculation takes only a few seconds even for complex and detailed input. Despite the speedup, the proposed algorithm still keeps the advantages of previous techniques: With comparable quantitative measures of shape distortion, it accurately scales all areas, correctly fits the regions together, and generates a map projection for every point. We demonstrate the use of our algorithm with applications to the 2016 US election results, the gross domestic products of Indian states and Chinese provinces, and the spatial distribution of deaths in the London borough of Kensington and Chelsea between 2011 and 2014. PMID:29463721
The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.
2013-07-01
The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 x 10^4) cores for topology map generation and excellent scaling on O(1 x 10^5) cores for the data transfer operation with meshes of O(1 x 10^9) elements. (authors)
EAARL Topography - George Washington Birthplace National Monument 2008
Brock, John C.; Nayegandhi, Amar; Wright, C. Wayne; Stevens, Sara; Yates, Xan
2009-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) and first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the George Washington Birthplace National Monument in Virginia, acquired on March 26, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
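The first/last significant return extraction can be sketched as threshold crossing on a synthetic waveform (an illustrative stand-in; the ALPS range-extraction and bare-earth filtering algorithms are more sophisticated, and the pulse shapes and threshold below are invented):

```python
import numpy as np

def first_last_returns(waveform, threshold):
    """Range bins of the first and last significant returns in a lidar
    waveform: the first and last samples exceeding a threshold."""
    hits = np.flatnonzero(np.asarray(waveform, dtype=float) > threshold)
    if hits.size == 0:
        return None, None
    return int(hits[0]), int(hits[-1])

# demo: synthetic waveform with a canopy return and a later ground return
t = np.arange(200)
wf = 5 * np.exp(-((t - 40) / 3) ** 2) + 3 * np.exp(-((t - 120) / 3) ** 2)
first, last = first_last_returns(wf, threshold=0.5)
```

In vegetated terrain the first return corresponds to the canopy surface and the last return approximates the ground, which is why the point cloud of last-return elevations feeds the bare-earth filtering step.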
EAARL Coastal Topography - Northern Gulf of Mexico, 2007: First Surface
Smith, Kathryn E.L.; Nayegandhi, Amar; Wright, C. Wayne; Bonisteel, Jamie M.; Brock, John C.
2009-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) elevation data were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. The project provides highly detailed and accurate datasets of select barrier islands and peninsular regions of Louisiana, Mississippi, Alabama, and Florida, acquired June 27-30, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL Coastal Topography-Pearl River Delta 2008: Bare Earth
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Miner, Michael D.; Yates, Xan; Bonisteel, Jamie M.
2009-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the University of New Orleans (UNO), Pontchartrain Institute for Environmental Sciences (PIES), New Orleans, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Pearl River Delta in Louisiana and Mississippi, acquired March 9-11, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
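The "first and last significant return" extraction described above can be illustrated with a minimal sketch: threshold a digitized waveform and take the first and last samples that exceed the threshold. This is not the ALPS implementation; the threshold value, sample spacing, and synthetic two-peak waveform (a canopy return and a ground return) are invented for the example.

```python
import numpy as np

def first_last_returns(waveform, threshold, sample_ns=1.0):
    """Return (first, last) sample times in nanoseconds at which the
    waveform first and last exceeds the threshold, or None if no
    sample does. A real processor would also fit pulse shapes and
    correct for system timing; this only shows the thresholding idea."""
    above = np.flatnonzero(np.asarray(waveform) > threshold)
    if above.size == 0:
        return None
    return above[0] * sample_ns, above[-1] * sample_ns

# Synthetic waveform: noise floor of 5, a strong return near sample 10
# (e.g., canopy) and a weaker return near sample 40 (e.g., ground).
t = np.arange(60)
wf = (5
      + 80 * np.exp(-0.5 * ((t - 10) / 2) ** 2)
      + 50 * np.exp(-0.5 * ((t - 40) / 2) ** 2))

first, last = first_last_returns(wf, threshold=20.0)
```

For a vegetated target, the first return corresponds to the canopy surface and the last return approximates the ground, which is why the bare-earth products described in these records are built from last-return elevations.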
EAARL Coastal Topography-Pearl River Delta 2008: First Surface
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Miner, Michael D.; Yates, Xan; Bonisteel, Jamie M.
2009-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the University of New Orleans (UNO), Pontchartrain Institute for Environmental Sciences (PIES), New Orleans, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Pearl River Delta in Louisiana and Mississippi, acquired March 9-11, 2008. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL Topography - Jean Lafitte National Historical Park and Preserve 2006
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Segura, Martha; Yates, Xan
2008-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) and bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Jean Lafitte National Historical Park and Preserve in Louisiana, acquired on September 22, 2006. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL Coastal Topography - Northern Gulf of Mexico, 2007: Bare Earth
Smith, Kathryn E.L.; Nayegandhi, Amar; Wright, C. Wayne; Bonisteel, Jamie M.; Brock, John C.
2009-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. The purpose of this project is to provide highly detailed and accurate datasets of select barrier islands and peninsular regions of Louisiana, Mississippi, Alabama, and Florida, acquired on June 27-30, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
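The "bare earth under vegetation" filtering mentioned above can be sketched in miniature: from a point cloud of last-return elevations, keep the lowest point in each grid cell as a crude ground estimate. ALPS uses more specialized filtering algorithms; the cell size, the `grid_minimum` helper, and the sample points here are invented for the illustration.

```python
import numpy as np

def grid_minimum(points, cell=2.0):
    """points: (N, 3) array of x, y, z last-return coordinates.
    Returns {(ix, iy): minimum z} per grid cell - a crude ground
    surface, since vegetation returns sit above true ground."""
    ground = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in ground or z < ground[key]:
            ground[key] = z
    return ground

# Three last returns: one caught in canopy, two reaching the ground.
pts = np.array([
    [0.5, 0.5, 12.0],   # canopy return
    [1.0, 1.2, 3.1],    # ground return, same cell
    [3.5, 0.4, 2.8],    # ground return, neighboring cell
])
ground = grid_minimum(pts, cell=2.0)
```

Per-cell minima discard the 12.0-meter canopy point in favor of the 3.1-meter ground point in the same cell; production filters add slope and neighborhood consistency checks that this sketch omits.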
EAARL Submerged Topography - U.S. Virgin Islands 2003
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.
2008-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived submerged topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), South Florida-Caribbean Network, Miami, FL; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate bathymetric datasets of a portion of the U.S. Virgin Islands, acquired on April 21, 23, and 30, May 2, and June 14 and 17, 2003. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for presurvey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL-B submerged topography: Barnegat Bay, New Jersey, post-Hurricane Sandy, 2012-2013
Wright, C. Wayne; Troche, Rodolfo J.; Kranenburg, Christine J.; Klipp, Emily S.; Fredericks, Xan; Nagle, David B.
2014-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived submerged topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida. This project provides highly detailed and accurate datasets for part of Barnegat Bay, New Jersey, acquired post-Hurricane Sandy on November 1, 5, 16, 20, and 30, 2012; December 5, 6, and 21, 2012; and January 10, 2013. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar system, known as the second-generation Experimental Advanced Airborne Research Lidar (EAARL-B), was used during data acquisition. The EAARL-B system is a raster-scanning, waveform-resolving, green-wavelength (532-nm) lidar designed to map nearshore bathymetry, topography, and vegetation structure simultaneously. The EAARL-B sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, down-looking red-green-blue (RGB) and infrared (IR) digital cameras, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL-B platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL-B system. The resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed originally in a NASA-USGS collaboration. 
ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Web site.
Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Fredericks, Xan; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Stevens, Sara
2011-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the National Park Service Southeast Coast Network's Cape Hatteras National Seashore in North Carolina, acquired post-Nor'Ida (November 2009 nor'easter) on November 27 and 29 and December 1, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL coastal topography and imagery - Western Louisiana, post-Hurricane Rita, 2005: First surface
Bonisteel-Cormier, Jamie M.; Wright, Wayne C.; Fredericks, Alexandra M.; Klipp, Emily S.; Nagle, Doug B.; Sallenger, Asbury H.; Brock, John C.
2013-01-01
These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, Virginia. This project provides highly detailed and accurate datasets of a portion of the Louisiana coastline beachface, acquired post-Hurricane Rita on September 27-28 and October 2, 2005. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the National Aeronautics and Space Administration (NASA) Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Website.
EAARL Coastal Topography-Maryland and Delaware, Post-Nor'Ida, 2009
Bonisteel-Cormier, J.M.; Vivekanandan, Saisudha; Nayegandhi, Amar; Sallenger, A.H.; Wright, C.W.; Brock, J.C.; Nagle, D.B.; Klipp, E.S.
2010-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL. This project provides highly detailed and accurate datasets of a portion of the eastern Maryland and Delaware coastline beachface, acquired post-Nor'Ida (November 2009 nor'easter) on November 28 and 30, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Nagle, D.B.; Vivekanandan, Saisudha; Fredericks, Xan
2010-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the eastern Louisiana barrier islands, acquired post-Hurricane Gustav (September 2008 hurricane) on September 6 and 7, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Fredericks, Xan; Stevens, Sara
2010-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the National Park Service Southeast Coast Network's Cape Hatteras National Seashore in North Carolina, acquired post-Nor'Ida (November 2009 nor'easter) on November 27 and 29 and December 1, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
EAARL Coastal Topography-Mississippi and Alabama Barrier Islands, Post-Hurricane Gustav, 2008
Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Nagle, D.B.; Klipp, E.S.; Vivekanandan, Saisudha; Fredericks, Xan; Segura, Martha
2010-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Mississippi and Alabama barrier islands, acquired post-Hurricane Gustav (September 2008 hurricane) on September 8, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
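The waveform-processing step described above (extracting the range to the first and last significant return) can be illustrated with a minimal sketch. This is not ALPS code; the threshold value and the toy waveform are invented for illustration:

```python
def significant_returns(waveform, threshold):
    """Return (first, last) sample indices where the amplitude rises
    above a noise threshold -- a simplified stand-in for the
    first/last-return extraction described above."""
    above = [i for i, amp in enumerate(waveform) if amp > threshold]
    if not above:
        return None, None
    return above[0], above[-1]

# A toy digitized waveform: noise, a canopy return, noise, a ground return.
wf = [2, 3, 2, 40, 18, 4, 3, 2, 55, 20, 3, 2]
first, last = significant_returns(wf, threshold=10)
```

In a real waveform lidar the sample indices correspond to two-way travel time, so `first` and `last` map to the ranges of the canopy top and the ground surface.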
Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Klipp, E.S.; Vivekanandan, Saisudha; Fredericks, Xan; Stevens, Sara
2010-01-01
These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Assateague Island National Seashore in Maryland and Virginia, acquired post-Nor'Ida (November 2009 nor'easter) on November 28 and 30, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
EAARL Coastal Topography-Fire Island National Seashore, New York, Post-Nor'Ida, 2009
Nayegandhi, Amar; Vivekanandan, Saisudha; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Bonisteel-Cormier, J.M.; Fredericks, Xan; Stevens, Sara
2010-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Fire Island National Seashore in New York, acquired post-Nor'Ida (November 2009 nor'easter) on December 4, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
EAARL coastal topography and imagery-Fire Island National Seashore, New York, 2009
Vivekanandan, Saisudha; Klipp, E.S.; Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Fredericks, Xan; Stevens, Sara
2010-01-01
These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Fire Island National Seashore in New York, acquired on July 9 and August 3, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
EAARL Coastal Topography-Chandeleur Islands, Louisiana, 2010: Bare Earth
Nayegandhi, Amar; Bonisteel-Cormier, Jamie M.; Brock, John C.; Sallenger, A.H.; Wright, C. Wayne; Nagle, David B.; Vivekanandan, Saisudha; Yates, Xan; Klipp, Emily S.
2010-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and submerged topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the Chandeleur Islands, acquired March 3, 2010. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
EAARL Coastal Topography-Eastern Florida, Post-Hurricane Jeanne, 2004: First Surface
Fredericks, Xan; Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Wright, C.W.; Sallenger, A.H.; Brock, J.C.; Klipp, E.S.; Nagle, D.B.
2010-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of a portion of the eastern Florida coastline beachface, acquired post-Hurricane Jeanne (September 2004 hurricane) on October 1, 2004. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
Nayegandhi, Amar; Vivekanandan, Saisudha; Brock, J.C.; Wright, C.W.; Bonisteel-Cormier, J.M.; Nagle, D.B.; Klipp, E.S.; Stevens, Sara
2010-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Sandy Hook Unit of Gateway National Recreation Area in New Jersey, acquired post-Nor'Ida (November 2009 nor'easter) on December 4, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
Nagle, David B.; Nayegandhi, Amar; Yates, Xan; Brock, John C.; Wright, C. Wayne; Bonisteel, Jamie M.; Klipp, Emily S.; Segura, Martha
2010-01-01
These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) topography, first-surface (FS) topography, and canopy-height (CH) datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Science Center, St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Naval Live Oaks Area in Florida's Gulf Islands National Seashore, acquired June 30, 2007. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
EAARL Coastal Topography - Fire Island National Seashore 2007
Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Bonisteel, Jamie M.
2008-01-01
These remotely sensed, geographically referenced elevation measurements of Lidar-derived first surface (FS) and bare earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of Fire Island National Seashore in New York, acquired on April 29-30 and May 15-16, 2007. The datasets are made available for use as a management tool to research scientists and natural resource managers. An innovative airborne Lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) Lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive Lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers and an integrated miniature digital inertial measurement unit, which provide for submeter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a Lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of Lidar data in an interactive or batch mode. Modules for pre-survey flight line definition, flight path plotting, Lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is routinely used to create maps that represent submerged or first surface topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
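The 'bare earth' filtering idea mentioned above (deriving ground elevation under vegetation from a point cloud of last-return elevations) can be sketched in its simplest form: bin the last-return points into a horizontal grid and keep the lowest elevation per cell. This is a toy illustration with invented coordinates, not the ALPS filter, which uses more sophisticated adaptive criteria:

```python
import math

def bare_earth_minimum(points, cell_size):
    """Toy bare-earth filter: bin last-return points (x, y, z) into a
    grid and keep the lowest elevation in each cell.  The minimum-per-
    cell rule is the starting point for real vegetation-removal
    filters, which add adaptive slope and neighborhood tests."""
    cells = {}
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        if key not in cells or z < cells[key]:
            cells[key] = z
    return cells

pts = [(0.2, 0.3, 5.1),   # canopy return
       (0.8, 0.1, 1.4),   # ground return in the same cell: keep 1.4
       (3.5, 0.4, 2.0)]   # a different cell
grid = bare_earth_minimum(pts, cell_size=1.0)
```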
EAARL Coastal Topography-Assateague Island National Seashore, 2008: Bare Earth
Bonisteel, Jamie M.; Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Klipp, Emily S.
2009-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived bare-earth (BE) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Assateague Island National Seashore in Maryland and Virginia, acquired March 24-25, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL) was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for pre-survey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
EAARL Coastal Topography-Assateague Island National Seashore, 2008: First Surface
Bonisteel, Jamie M.; Nayegandhi, Amar; Brock, John C.; Wright, C. Wayne; Stevens, Sara; Yates, Xan; Klipp, Emily S.
2009-01-01
These remotely sensed, geographically referenced elevation measurements of lidar-derived first-surface (FS) topography were produced as a collaborative effort between the U.S. Geological Survey (USGS), Florida Integrated Science Center (FISC), St. Petersburg, FL; the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Assateague Island National Seashore in Maryland and Virginia, acquired March 24-25, 2008. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multi-spectral color infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for pre-survey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations.
Improvements on a privacy-protection algorithm for DNA sequences with generalization lattices.
Li, Guang; Wang, Yadong; Su, Xiaohong
2012-10-01
When developing personal DNA databases, there must be an appropriate guarantee of anonymity, which means that the data cannot be related back to individuals. DNA lattice anonymization (DNALA) is a successful method for making personal DNA sequences anonymous. However, it uses time-consuming multiple sequence alignment and a low-accuracy greedy clustering algorithm. Furthermore, DNALA is not an online algorithm, so it cannot quickly return results when the database is updated. This study improves the DNALA method. Specifically, we replaced the multiple sequence alignment in DNALA with global pairwise sequence alignment to save time, and we designed a hybrid clustering algorithm comprising a maximum weight matching (MWM)-based algorithm and an online algorithm. The MWM-based algorithm is more accurate than the greedy algorithm in DNALA and has the same time complexity. The online algorithm can process data quickly when the database is updated.
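The advantage of maximum-weight matching over greedy pairing that the abstract describes can be shown on a toy similarity matrix. The weights below are invented stand-ins for pairwise alignment scores, and the exhaustive search is only practical for a handful of items; a production implementation would use a polynomial-time matching algorithm (e.g., the blossom algorithm):

```python
from itertools import permutations

def best_pairing(weights):
    """Exhaustive maximum-weight perfect matching for a small, even
    number of items.  weights[(i, j)] with i < j is the similarity
    between sequences i and j.  Brute force is exponential; it is
    used here only to make the optimum easy to verify."""
    n = max(max(pair) for pair in weights) + 1
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        pairs = [tuple(sorted((perm[k], perm[k + 1]))) for k in range(0, n, 2)]
        score = sum(weights[p] for p in pairs)
        if score > best_score:
            best, best_score = sorted(pairs), score
    return best, best_score

# A greedy pairer grabs the heaviest edge (0, 1) first (weight 10) and is
# then forced into (2, 3) (weight 1), total 11.  The optimal matching
# takes (0, 2) and (1, 3) for a total of 9 + 9 = 18.
w = {(0, 1): 10, (0, 2): 9, (0, 3): 2, (1, 2): 3, (1, 3): 9, (2, 3): 1}
pairing, total = best_pairing(w)
```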
Algorithm theoretical basis for GEDI level-4A footprint above ground biomass density.
NASA Astrophysics Data System (ADS)
Kellner, J. R.; Armston, J.; Blair, J. B.; Duncanson, L.; Hancock, S.; Hofton, M. A.; Luthcke, S. B.; Marselis, S.; Tang, H.; Dubayah, R.
2017-12-01
The Global Ecosystem Dynamics Investigation is a NASA Earth-Venture-2 mission that will place a multi-beam waveform lidar instrument on the International Space Station. GEDI data will provide globally representative measurements of vertical height profiles (waveforms) and estimates of above ground carbon stocks throughout the planet's temperate and tropical regions. Here we describe the current algorithm theoretical basis for the L4A footprint above ground biomass data product. The L4A data product is above ground biomass density (AGBD, Mg · ha⁻¹) at the scale of individual GEDI footprints (25 m diameter). Footprint AGBD is derived from statistical models that relate waveform height metrics to field-estimated above ground biomass. The field estimates are from long-term permanent plot inventories in which all free-standing woody plants greater than a diameter size threshold have been identified and mapped. We simulated GEDI waveforms from discrete-return airborne lidar data using the GEDI waveform simulator. We associated height metrics from simulated waveforms with field-estimated AGBD at 61 sites in temperate and tropical regions of North and South America, Europe, Africa, Asia and Australia. We evaluated the ability of empirical and physically-based regression and machine learning models to predict AGBD at the footprint level. Our analysis benchmarks the performance of these models in terms of site and region-specific accuracy and transferability using a globally comprehensive calibration and validation dataset.
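The kind of statistical model described, relating a waveform height metric to field-estimated biomass, can be sketched as a power-law fit in log-log space. The coefficients and data below are invented for illustration; the actual L4A models are calibrated against the field plot network described above.

```python
# Minimal sketch of a footprint-level biomass model: fit AGBD = a * h**b by
# ordinary least squares on log-transformed values. Data are synthetic.
import math

def fit_power_law(heights, agbd):
    """OLS of log(agbd) on log(height); returns (a, b) for agbd = a * h**b."""
    xs = [math.log(h) for h in heights]
    ys = [math.log(v) for v in agbd]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic calibration data following AGBD = 2 * h**1.5 exactly.
heights = [10.0, 20.0, 30.0, 40.0]
agbd = [2.0 * h ** 1.5 for h in heights]
a, b = fit_power_law(heights, agbd)
```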
Computer-Guided Deep Brain Stimulation Programming for Parkinson's Disease.
Heldman, Dustin A; Pulliam, Christopher L; Urrea Mendoza, Enrique; Gartner, Maureen; Giuffrida, Joseph P; Montgomery, Erwin B; Espay, Alberto J; Revilla, Fredy J
2016-02-01
Pilot study to evaluate computer-guided deep brain stimulation (DBS) programming designed to optimize stimulation settings using objective motion sensor-based motor assessments. Seven subjects (five males; 54-71 years) with Parkinson's disease (PD) and recently implanted DBS systems participated in this pilot study. Within two months of lead implantation, each subject returned to the clinic to undergo computer-guided programming and parameter selection. A motion sensor was placed on the index finger of the more affected hand. Software guided a monopolar survey during which monopolar stimulation on each contact was iteratively increased, followed by an automated assessment of tremor and bradykinesia. After completing assessments at each setting, a software algorithm determined stimulation settings designed to minimize symptom severities, side effects, and battery usage. Optimal DBS settings were chosen based on the average severity of motor symptoms measured by the motion sensor. Settings chosen by the software algorithm identified a therapeutic window and improved tremor and bradykinesia by an average of 35.7% compared with baseline in the "off" state (p < 0.01). Motion sensor-based computer-guided DBS programming identified stimulation parameters that significantly improved tremor and bradykinesia with minimal clinician involvement. Automated motion sensor-based mapping is worthy of further investigation and may one day serve to extend programming to populations without access to specialized DBS centers. © 2015 International Neuromodulation Society.
Interval data clustering using self-organizing maps based on adaptive Mahalanobis distances.
Hajjar, Chantal; Hamdan, Hani
2013-10-01
The self-organizing map is a kind of artificial neural network used to map high-dimensional data into a low-dimensional space. This paper presents a self-organizing map for interval-valued data based on adaptive Mahalanobis distances, in order to cluster interval data with topology preservation. Two methods based on the batch training algorithm for self-organizing maps are proposed. The first method uses a common Mahalanobis distance for all clusters. In the second method, the algorithm starts with a common Mahalanobis distance and then switches to a distinct distance per cluster, which allows a clustering better adapted to the given data set. The performances of the proposed methods are compared and discussed using artificial and real interval data sets. Copyright © 2013 Elsevier Ltd. All rights reserved.
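The per-cluster distance at the heart of the second method can be illustrated in two dimensions. This is an assumed, minimal form of the assignment step only (not the full batch SOM with neighborhood updates): each prototype carries its own covariance matrix, and an input is assigned to the prototype with the smallest Mahalanobis distance d² = (x − w)ᵀ S⁻¹ (x − w).

```python
# Sketch of per-cluster Mahalanobis assignment in 2-D with an explicit
# 2x2 matrix inverse. Prototypes and covariances are invented.

def inv2(S):
    (a, b), (c, d) = S
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mahalanobis2(x, w, S):
    """Squared Mahalanobis distance of x from prototype w with covariance S."""
    dx = [x[0] - w[0], x[1] - w[1]]
    Si = inv2(S)
    y = [Si[0][0] * dx[0] + Si[0][1] * dx[1],
         Si[1][0] * dx[0] + Si[1][1] * dx[1]]
    return dx[0] * y[0] + dx[1] * y[1]

def best_matching_unit(x, prototypes, covariances):
    d2 = [mahalanobis2(x, w, S) for w, S in zip(prototypes, covariances)]
    return min(range(len(d2)), key=d2.__getitem__)

protos = [(0.0, 0.0), (4.0, 0.0)]
covs = [[[1.0, 0.0], [0.0, 1.0]],      # broad cluster
        [[0.25, 0.0], [0.0, 0.25]]]    # tight cluster
bmu = best_matching_unit((2.5, 0.0), protos, covs)
```

Note that the point (2.5, 0) is Euclidean-closer to the second prototype, yet the broad first cluster claims it: this is exactly the adaptation to cluster shape that a shared Euclidean metric cannot express.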
NASA Astrophysics Data System (ADS)
Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad
2016-01-01
In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.
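The "one-against-one" construction the authors use to lift the binary oRF to multiple classes can be sketched generically. The base learner below is a deliberately trivial 1-D threshold model standing in for an oblique random forest; class names and data are invented.

```python
# One-against-one multiclass wrapper: one binary model per class pair,
# pair winners vote, majority wins. The threshold "classifier" is a toy
# stand-in for the real oRF base learner.
import itertools
from collections import Counter

def train_pairwise(X, y):
    """One midpoint-threshold model per class pair, ordered by feature mean."""
    models = []
    for pair in itertools.combinations(sorted(set(y)), 2):
        mean = lambda c: sum(x for x, lab in zip(X, y) if lab == c) / y.count(c)
        lo, hi = sorted(pair, key=mean)
        models.append((lo, hi, (mean(lo) + mean(hi)) / 2.0))
    return models

def predict(models, x):
    """Each pairwise model votes; the majority class wins."""
    votes = Counter()
    for lo, hi, thr in models:
        votes[lo if x < thr else hi] += 1
    return votes.most_common(1)[0][0]

# Toy 1-D "spectra": w = water, g = grassland, f = forest.
X = [0.0, 0.2, 1.0, 1.1, 2.0, 2.2]
y = ["w", "w", "g", "g", "f", "f"]
m = train_pairwise(X, y)
```

The "one-against-all" variant would instead train one model per class against the pooled remainder and pick the class with the strongest response.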
Intelligent Real-Time Problem Solving
1990-01-01
uncertainty, its preferences, and its attitude towards risk. ... Control Theory: This paradigm has culminated in a mature academic discipline that has produced a ... systems offer the obvious advantages of evaluating against reality, but they are often cumbersome or even unavailable, may pose unacceptable risks, etc. In ... return an answer. For the sorts of anytime algorithms we are interested in, one can determine the expected utility of the answers returned by the
Prediction model for the return to work of workers with injuries in Hong Kong.
Xu, Yanwen; Chan, Chetwyn C H; Lo, Karen Hui Yu-Ling; Tang, Dan
2008-01-01
This study attempts to formulate a prediction model of return to work for a group of workers in Hong Kong who had been suffering from chronic pain and physical injury and were out of work. The study used the Case-Based Reasoning (CBR) method and compared the result with the statistical method of a logistic regression model. The case base for the CBR algorithm was composed of 67 cases, which were also used in the logistic regression model. The testing cases were 32 participants with a background and characteristics similar to those in the case base. Hard constraints and a Euclidean distance metric were used in CBR to search for the cases closest to the trial case. The usefulness of the algorithm was tested on the 32 new participants, and the accuracy of predicting return-to-work outcomes was 62.5%, which was no better than the 71.2% accuracy derived from the logistic regression model. The results of the study enable a better understanding of CBR applied in the field of occupational rehabilitation by comparison with conventional regression analysis. The findings also shed light on the development of relevant interventions for the return-to-work process of these workers.
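The retrieval step described, hard constraints followed by Euclidean nearest-neighbor search, can be sketched as follows. The feature names, the constraint, and the case values are invented for illustration and are not the study's actual variables.

```python
# Toy case-based retrieval: filter the case base by a hard constraint, then
# rank the remaining cases by Euclidean distance over numeric features.
import math

cases = [
    {"age": 45, "pain": 7, "months_off": 12, "injury": "back", "rtw": False},
    {"age": 30, "pain": 3, "months_off": 2,  "injury": "hand", "rtw": True},
    {"age": 43, "pain": 6, "months_off": 10, "injury": "back", "rtw": False},
]

def retrieve(case_base, query, k=1, features=("age", "pain", "months_off")):
    # Hard constraint (hypothetical): only compare cases of the same injury type.
    pool = [c for c in case_base if c["injury"] == query["injury"]]
    dist = lambda c: math.sqrt(sum((c[f] - query[f]) ** 2 for f in features))
    return sorted(pool, key=dist)[:k]

query = {"age": 44, "pain": 7, "months_off": 11, "injury": "back"}
nearest = retrieve(cases, query)[0]   # prediction = outcome of nearest case
```

In practice features would be normalized first, since raw units (years vs. months) otherwise dominate the distance.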
Wang, Jiexin; Uchibe, Eiji; Doya, Kenji
2017-01-01
EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters using the maximum of a lower bound of expected return, which makes gradient calculation and learning rate tuning unnecessary. Previous algorithms like Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration implemented the mechanisms to discard useless low-return episodes either implicitly or using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean, and multiples of SDs from the mean. The simulation results of benchmark tasks of pendulum swing up and cart-pole balancing, and standing up and balancing of a two-wheeled smartphone robot showed improved performances. We further implemented the adaptive baseline with mean in our two-wheeled smartphone robot hardware to test its performance in the standing up and balancing task, and a view-based approaching task. Our results showed that with adaptive baseline, the method outperformed the previous algorithms and achieved faster, and more precise behaviors at a higher successful rate. PMID:28167910
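The adaptive-baseline idea reduces to a simple rule: recompute the discard threshold from the current reward history instead of fixing it in advance. A minimal sketch, assuming a threshold of the form mean + c · SD (the constant c and the history are invented):

```python
# Adaptive baseline for episode selection: keep only episodes whose return
# exceeds a data-driven threshold computed from the current history.
import statistics

def adaptive_filter(returns, c=0.0):
    """Keep episodes with return > mean + c * sample SD of the history."""
    baseline = statistics.mean(returns) + c * statistics.stdev(returns)
    return [r for r in returns if r > baseline]

history = [1.0, 2.0, 3.0, 4.0, 10.0]
kept_mean = adaptive_filter(history)            # baseline = mean = 4.0
kept_loose = adaptive_filter(history, c=-0.5)   # baseline below the mean
```

A fixed baseline would keep the same episodes regardless of how the reward distribution shifts during learning; the adaptive threshold tightens automatically as returns improve.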
Understanding Measurements Returned by the Helioseismic and Magnetic Imager
NASA Astrophysics Data System (ADS)
Cohen, Daniel Parke; Criscuoli, Serena
2014-06-01
The Helioseismic and Magnetic Imager (HMI) aboard the Solar Dynamics Observatory (SDO) observes the Sun at the FeI 6173 Å line and returns full disk maps of line-of-sight observables including the magnetic field flux, FeI line width, line depth, and continuum intensity. To properly interpret such data it is important to understand any issues with the HMI and the pipeline that produces these observables. To this end, HMI data were analyzed at both daily intervals for a span of 3 years at disk center in the quiet Sun and hourly intervals for a span of 200 hours around an active region. Systematic effects attributed to instrument adjustments and re-calibrations, variations in the transmission filters, and the orbital velocity of the SDO were found, while the actual physical evolution of these observables was difficult to determine. Velocity and magnetic flux measurements are less affected, as the aforementioned effects are partially compensated for by the HMI algorithm; the other observables are instead affected by larger uncertainties. In order to model these uncertainties, the HMI pipeline was tested with synthetic spectra generated from various 1D atmosphere models with a radiative transfer code (the RH code). It was found that HMI estimates of line width, line depth, and continuum intensity are highly dependent on the shape of the line, and therefore highly dependent on the line-of-sight angle and the magnetic field associated with the model. The best estimates are found for quiet regions at disk center, for which the relative differences between theoretical and HMI algorithm values are 6-8% for line width, 10-15% for line depth, and 0.1-0.2% for continuum intensity. In general, the relative difference between theoretical values and HMI estimates increases toward the limb and with increasing field strength; the HMI algorithm seems to fail in regions with fields larger than ~2000 G.
This work is carried out through the National Solar Observatory Research Experiences for Undergraduate (REU) site program, which is co-funded by the Department of Defense in partnership with the NSF REU Program. The National Solar Observatory is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation.
Symmetric encryption algorithms using chaotic and non-chaotic generators: A review
Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.
2015-01-01
This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold’s cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, using either substitution only (fractals), permutation only (chess-based), or both. Moreover, two different permutation scenarios are presented where the permutation phase has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided, from short to long keys, to resist brute-force attacks. In addition, sensitivities of those different techniques to a one bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus much recent research with respect to the generators used, type of encryption, and analyses is presented to highlight the strengths and added contribution of this paper. PMID:26966561
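As a concrete example of the permutation phase, one of the discrete chaotic maps covered, Arnold's cat map, can be written in a few lines. This is the generic textbook form, not any of the 27 reviewed implementations: pixel (x, y) of an N × N image moves to ((x + y) mod N, (x + 2y) mod N), and since the map is a bijection, iterating it eventually restores the image.

```python
# Arnold's cat map as a permutation-only scrambling step on a square image.

def cat_map(img):
    """One application of the cat map; img is a list of rows, img[y][x]."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # (x, y) -> (x', y') = ((x + y) mod n, (x + 2y) mod n)
            out[(x + 2 * y) % n][(x + y) % n] = img[y][x]
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
scrambled = cat_map(img)
restored = cat_map(cat_map(cat_map(scrambled)))   # the map has period 4 mod 3
```

Permutation alone leaves the pixel histogram unchanged, which is exactly why the reviewed schemes pair it with a substitution phase.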
NASA Astrophysics Data System (ADS)
Sullivan, J. T.; McGee, T. J.; Leblanc, T.; Sumnicht, G. K.; Twigg, L. W.
2015-04-01
The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical and aloft ozone concentrations, especially during air quality episodes. To better characterize tropospheric ozone, the Tropospheric Ozone Lidar Network (TOLNet) has recently been developed, which currently consists of five different ozone DIAL instruments, including the TROPOZ. This paper addresses the necessary procedures to validate the TROPOZ retrieval algorithm and develops a primary standard for retrieval consistency and optimization within TOLNet. This paper focuses on ensuring that the TROPOZ and future TOLNet algorithms properly quantify ozone concentrations; a following paper will focus on defining a systematic uncertainty analysis standard for all TOLNet instruments. Although this paper is used to optimize the TROPOZ retrieval, the methodology presented may be extended and applied to most other DIAL instruments, even if the atmospheric product of interest is not tropospheric ozone (e.g. temperature or water vapor). The analysis begins by computing synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases from the known profile, thereby identifying any areas that may need refinement for a new operational version of the TROPOZ retrieval algorithm. A new vertical resolution scheme is presented, which was upgraded from a constant vertical resolution to a variable vertical resolution, in order to yield a statistical uncertainty of <10%.
The optimized vertical resolution scheme retains the ability to resolve fluctuations in the known ozone profile and now allows near-field signals to be more appropriately smoothed. With these revisions, the optimized TROPOZ retrieval algorithm (TROPOZopt) is effective nearly 200 m closer to the surface. Also, as compared to the previous version of the retrieval, the TROPOZopt has reduced the mean profile bias by 3.5%, and large reductions in bias (near 15%) were apparent above 4.5 km. Finally, to ensure the TROPOZopt retrieval algorithm is robust enough to handle actual lidar return signals, a comparison is shown against four nearby ozonesonde measurements. The ozonesondes agree well with the retrieval and are mostly within the TROPOZopt retrieval uncertainty bars. A final mean percent difference plot between the TROPOZopt and the ozonesondes indicates that the new operational retrieval is mostly within 10% of the ozonesonde measurement and that no systematic biases are present. The authors believe that this analysis has significantly added to the confidence in the TROPOZ instrument and provides a standard for current and future TOLNet algorithms.
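The synthetic-signal test rests on the basic DIAL relation: ozone number density follows from the range derivative of the log ratio of the off- and on-line returns, n(R) = (1 / (2Δσ)) d/dR ln(P_off / P_on). A dimensionless toy sketch (ignoring the background, overlap, and differential-scattering terms a real retrieval must handle):

```python
# Toy DIAL retrieval: recover a known constant density from synthetic
# two-wavelength returns. All values are dimensionless illustrations.
import math

def dial_density(p_on, p_off, dr, d_sigma):
    """Finite-difference DIAL retrieval on uniformly spaced range bins."""
    log_ratio = [math.log(off / on) for on, off in zip(p_on, p_off)]
    return [(log_ratio[i + 1] - log_ratio[i]) / (2.0 * d_sigma * dr)
            for i in range(len(log_ratio) - 1)]

# Synthetic returns for a constant density n0, with absorption as the only
# attenuation: P(R) = exp(-2 * sigma * n0 * R) at each wavelength.
n0, sig_on, sig_off, dr = 2.0, 0.03, 0.01, 1.0
R = [i * dr for i in range(6)]
p_on = [math.exp(-2.0 * sig_on * n0 * r) for r in R]
p_off = [math.exp(-2.0 * sig_off * n0 * r) for r in R]
n_hat = dial_density(p_on, p_off, dr, sig_on - sig_off)
```

Because the derivative is taken over a finite window, the choice of that window is exactly the vertical-resolution/statistical-uncertainty trade-off the optimized scheme addresses.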
NASA Astrophysics Data System (ADS)
Wu, Guangyuan; Niu, Shijun; Li, Xiaozhou; Hu, Guichun
2018-04-01
With the increasing globalization of the printing industry, remote proofing is becoming an inevitable trend. Cross-media color reproduction using remote proofing technologies occurs across different color gamuts, which usually leads to the problem of incompatible color gamuts. In this paper, to achieve equivalent color reproduction between a monitor and a printer, a frequency-based spatial gamut mapping algorithm is proposed to decrease the loss of visual color information. The design of the algorithm is based on the contrast sensitivity functions (CSF): a CSF spatial filter is exploited to preserve luminance at high spatial frequencies and chrominance at low frequencies. First, we show a general framework for how to apply the CSF spatial filter to retain relevant visual information. Then we compare the proposed framework with HPMINDE, CUSP, and Bala's algorithm. The psychophysical experimental results indicate the good performance of the proposed algorithm.
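The frequency split such filtering performs can be caricatured in one dimension: compress the low-frequency luminance band while passing the high-frequency detail through unchanged. Everything below (a 3-tap box filter standing in for the CSF, a toy "compression toward mid-grey") is an invented illustration, not the proposed algorithm.

```python
# 1-D caricature of frequency-based gamut mapping: gamut-compress the
# low-pass band, re-add the untouched high-frequency detail.

def lowpass(signal):
    """3-tap moving average with edge replication."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
            for i in range(1, len(padded) - 1)]

def compress(value, scale=0.5):
    """Toy gamut compression towards mid-grey (50)."""
    return 50 + (value - 50) * scale

def map_luminance(lum):
    base = lowpass(lum)                             # low frequencies
    detail = [l - b for l, b in zip(lum, base)]     # high frequencies
    return [compress(b) + d for b, d in zip(base, detail)]

lum = [80.0, 80.0, 20.0, 20.0, 80.0, 80.0]
mapped = map_luminance(lum)
```

Compressing the signal directly would shrink the edge step from 60 to 30 units; re-adding the detail keeps the local contrast much closer to the original, which is the perceptual point of the frequency split.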
Research on Image Encryption Based on DNA Sequence and Chaos Theory
NASA Astrophysics Data System (ADS)
Tian Zhang, Tian; Yan, Shan Jun; Gu, Cheng Yan; Ren, Ran; Liao, Kai Xin
2018-04-01
Nowadays encryption is a common technique to protect image data from unauthorized access. In recent years, many scientists have proposed various encryption algorithms based on DNA sequences, providing new ideas for the design of image encryption algorithms. Therefore, a new method of image encryption based on DNA computing technology is proposed in this paper, in which the original image is encrypted by DNA coding and 1-D logistic chaotic mapping. First, the algorithm uses two modules as the encryption key: the first module uses a real DNA sequence, and the second is generated by one-dimensional logistic chaotic mapping. Second, the algorithm uses DNA complementary rules to encode the original image, and uses the key and DNA computing technology to compute each pixel value of the original image, so as to encrypt the whole image. Simulation results show that the algorithm has good encryption effect and security.
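Two of the ingredients, a 1-D logistic-map keystream and a 2-bit DNA coding of pixel bytes, can be sketched briefly. The coding rule and parameters below are illustrative assumptions; the paper's complementary rules and key construction are richer.

```python
# Toy versions of the building blocks: logistic-map keystream + DNA coding.

def logistic_keystream(x0, n, r=3.99):
    """Iterate x <- r*x*(1-x) and quantise each state to a byte."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

BASES = "ACGT"   # 2 bits per base: A=00, C=01, G=10, T=11

def byte_to_dna(b):
    """DNA-style coding of one byte, most significant bits first."""
    return "".join(BASES[(b >> s) & 3] for s in (6, 4, 2, 0))

def xor_cipher(pixels, x0):
    """XOR with the keystream; applying it twice restores the input."""
    return [p ^ k for p, k in zip(pixels, logistic_keystream(x0, len(pixels)))]

pixels = [12, 200, 31, 7]
cipher = xor_cipher(pixels, x0=0.3141)
plain = xor_cipher(cipher, x0=0.3141)
```

The key sensitivity the paper reports comes from the chaotic iteration: a tiny change in x0 produces an unrelated keystream after a few steps.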
Joint demosaicking and zooming using moderate spectral correlation and consistent edge map
NASA Astrophysics Data System (ADS)
Zhou, Dengwen; Dong, Weiming; Chen, Wengang
2014-07-01
The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance perhaps significantly degrades on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is consistently used in demosaicking and zooming. It also moderately utilizes the spectral correlation between color planes. The experimental results confirm that the proposed algorithm produces an excellent performance on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency. It provides a better tradeoff among adaptability, performance, and computational cost compared to the existing algorithms.
Real-time stereo matching using orthogonal reliability-based dynamic programming.
Gong, Minglun; Yang, Yee-Hong
2007-03-01
A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for approximately 60%-80% of pixels at approximately 10-20 frames per second. If needed, the algorithm can be configured for generating full-density disparity maps.
Development of an Algorithm for Satellite Remote Sensing of Sea and Lake Ice
NASA Astrophysics Data System (ADS)
Dorofy, Peter T.
Satellite remote sensing of snow and ice has a long history. The traditional method for many snow and ice detection algorithms has been the use of the Normalized Difference Snow Index (NDSI). This manuscript is composed of two parts. Chapter 1, Development of a Mid-Infrared Sea and Lake Ice Index (MISI) using the GOES Imager, discusses the desirability, development, and implementation of an alternative index for an ice detection algorithm, application of the algorithm to the detection of lake ice, and qualitative validation against other ice mapping products, such as the Ice Mapping System (IMS). Chapter 2, Application of Dynamic Threshold in a Lake Ice Detection Algorithm, continues with a discussion of the development of a method that considers the variable viewing and illumination geometry of observations throughout the day. The method is an alternative to Bidirectional Reflectance Distribution Function (BRDF) models. Evaluation of the performance of the algorithm is introduced by aggregating classified pixels within geometrical boundaries designated by IMS and obtaining sensitivity and specificity statistical measures.
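For reference, the traditional index the first chapter moves away from is computed from green and shortwave-infrared reflectances; snow is bright in the green and dark in the SWIR, so the index approaches +1 over snow and ice. The reflectance values below are illustrative:

```python
# Normalized Difference Snow Index from green and SWIR band reflectances.

def ndsi(green, swir):
    return (green - swir) / (green + swir)

snow_pixel = ndsi(0.8, 0.1)    # strongly positive over snow/ice
soil_pixel = ndsi(0.2, 0.25)   # slightly negative over bare ground
```

A fixed NDSI threshold is what breaks down under varying illumination geometry, which motivates both the mid-infrared index of Chapter 1 and the dynamic threshold of Chapter 2.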
Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar
Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu
2015-01-01
Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
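One of the conventional baselines, the Richardson–Lucy algorithm, is compact enough to sketch in one dimension. This is the generic textbook iteration (multiplicative updates with the mirrored kernel), not the paper's MAP deconvolution; the kernel and signal are invented.

```python
# 1-D Richardson-Lucy deconvolution with a small normalised blur kernel
# and zero-padded "same" convolution.

def conv(signal, kernel):
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                s += k * signal[idx]
        out.append(s)
    return out

def richardson_lucy(observed, kernel, iters=50):
    est = [sum(observed) / len(observed)] * len(observed)   # flat start
    mirrored = kernel[::-1]
    for _ in range(iters):
        blurred = conv(est, kernel)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = conv(ratio, mirrored)
        est = [e * c for e, c in zip(est, correction)]
    return est

kernel = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 10.0, 0.0, 0.0]        # a single point target
observed = conv(truth, kernel)            # the blurred measurement
restored = richardson_lucy(observed, kernel)
```

The multiplicative update keeps the estimate non-negative, which is why RL suits Poisson-like data; the paper's contribution is to handle the mixed Gaussian-plus-Poisson noise and the sparsity (Laplace) prior that plain RL lacks.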
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images using these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and then using real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
Multiresolution saliency map based object segmentation
NASA Astrophysics Data System (ADS)
Yang, Jian; Wang, Xin; Dai, ZhenYou
2015-11-01
Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used to initialize parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models introduce multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interaction to obtain more precise results without predefined pixel types. A concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map-based object segmentation method is simple and efficient.
Static vs. dynamic decoding algorithms in a non-invasive body-machine interface
Seáñez-González, Ismael; Pierella, Camilla; Farshchiansadegh, Ali; Thorp, Elias B.; Abdollahi, Farnaz; Pedersen, Jessica; Mussa-Ivaldi, Ferdinando A.
2017-01-01
In this study, we consider a non-invasive body-machine interface that captures body motions still available to people with spinal cord injury (SCI) and maps them into a set of signals for controlling a computer user interface while engaging in a sustained level of mobility and exercise. We compare the effectiveness of two decoding algorithms that transform a high-dimensional body-signal vector into a lower-dimensional control vector on 6 subjects with high-level SCI and 8 controls. One algorithm is based on a static map, set through principal component analysis (PCA), from current body signals to the current value of the control vector; the other on a dynamic map, set through a Kalman filter, from a segment of body signals to the value and temporal derivatives of the control vector. SCI and control participants performed straighter and smoother cursor movements with the Kalman algorithm during center-out reaching, but their movements were faster and more precise when using PCA. All participants were able to use the BMI’s continuous, two-dimensional control to type on a virtual keyboard and play pong, and performance with both algorithms was comparable. However, seven of eight control participants preferred PCA as their method of virtual wheelchair control. The unsupervised PCA algorithm was easier to train and seemed sufficient to achieve a higher degree of learnability and perceived ease of use. PMID:28092564
A Secure Alignment Algorithm for Mapping Short Reads to Human Genome.
Zhao, Yongan; Wang, Xiaofeng; Tang, Haixu
2018-05-09
Elastic and inexpensive computing resources such as clouds have been recognized as a useful solution for analyzing massive human genomic data (e.g., acquired by using next-generation sequencers) in biomedical research. However, outsourcing human genome computation to public or commercial clouds has been hindered by privacy concerns: even a small number of human genome sequences contain sufficient information for identifying the donor of the genomic data. This issue cannot be directly addressed by existing security and cryptographic techniques (such as homomorphic encryption), because they are too heavyweight to carry out practical genome computation tasks on massive data. In this article, we present a secure algorithm to accomplish read mapping, one of the most basic tasks in human genomic data analysis, based on a hybrid cloud computing model. Compared with the existing approaches, our algorithm delegates most computation to the public cloud, while only performing encryption and decryption on the private cloud, and thus makes maximum use of the computing resources of the public cloud. Furthermore, our algorithm reports similar results as the nonsecure read mapping algorithms, including the alignment between reads and the reference genome, which can be directly used in downstream analysis such as the inference of genomic variations. We implemented the algorithm in C++ and Python on a hybrid cloud system, in which the public cloud uses an Apache Spark system.
NASA Astrophysics Data System (ADS)
Osinski, Gordon R.; Lee, Pascal; Cockell, Charles S.; Snook, Kelly; Lim, Darlene S. S.; Braham, Stephen
2010-03-01
With the prospect of humans returning to the Moon by the end of the next decade, considerable attention is being paid to the technologies required to transport astronauts to the lunar surface and then to carry out surface science. Recent and ongoing initiatives have focused on the scientific questions to be asked. In contrast, few studies have addressed how these scientific priorities will be achieved. In this contribution, we provide some of the lessons learned from the exploration of the Haughton impact structure, an ideal lunar analogue site in the Canadian Arctic. Essentially, by studying how geologists carry out field science, we can provide guidelines for lunar surface operations. Our goal in this contribution is to inform the engineers and managers involved in mission planning, rather than the field geology community. Our results show that the exploration of the Haughton impact structure can be broken down into 3 distinct phases: (1) reconnaissance; (2) systematic regional-scale mapping and sampling; and (3) detailed local-scale mapping and sampling. This breakdown is similar to the classic scientific method practiced by field geologists, of regional exploratory mapping followed by directed mapping at a local scale, except that we distinguish between two different phases of exploratory mapping. Our data show that the number of stops, the number of samples collected, and the amount of data collected varied depending on the mission phase, as did the total distance covered per EVA. Thus, operational scenarios could take these differences into account, depending on the goals and duration of the mission.
Important lessons learned include the need for flexibility in mission planning in order to account for serendipitous discoveries, the highlighting of key "science supersites" that may require return visits, the need for a rugged but simple human-operated rover, laboratory space in the habitat, and adequate room for returned samples, both in the habitat and in the return vehicle. The proposed set of recommendations ideally should be tried and tested in future analogue missions at terrestrial impact sites prior to planetary missions.
Li, Haisen S; Zhong, Hualiang; Kim, Jinkoo; Glide-Hurst, Carri; Gulam, Misbah; Nurushev, Teamour S; Chetty, Indrin J
2014-01-06
Direct dose mapping (DDM) and energy/mass transfer (EMT) mapping are two essential algorithms for accumulating dose from different anatomic phases to a reference phase when there is organ motion or tumor/tissue deformation during the delivery of radiation therapy. DDM interpolates dose values from one dose grid to another and thus lacks rigor in defining the dose when tissue/tumor deformation maps multiple dose values to a single voxel in the reference phase. EMT, by contrast, counts the total energy and mass transferred to each voxel in the reference phase and calculates the dose by dividing the energy by the mass; it is therefore based on fundamentally sound physics principles. In this study, we implemented the two algorithms and integrated them within the Eclipse treatment planning system. We then compared the clinical dosimetric difference between the two algorithms for ten lung cancer patients receiving stereotactic radiosurgery treatment, by accumulating the delivered dose to the end-of-exhale (EE) phase. Specifically, the respiratory period was divided into ten phases, and the dose to each phase was calculated, mapped to the EE phase, and then accumulated. The displacement vector field generated by Demons-based registration of the source and reference images was used to transfer the dose and energy. The DDM and EMT algorithms produced noticeably different cumulative doses in regions with sharp mass density variations and/or high dose gradients. For the planning target volume (PTV) and internal target volume (ITV) minimum dose, the difference was up to 11% and 4%, respectively. This suggests that DDM might not be adequate for obtaining an accurate cumulative dose distribution; EMT should be considered instead.
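The EMT bookkeeping described above, pushing both energy (dose times mass) and mass from each source-phase voxel to its mapped reference voxel and dividing only at the end, can be sketched in a few lines of NumPy. This is a minimal illustration of the principle, not the authors' Eclipse-integrated implementation; the array layout and function name are our assumptions.

```python
import numpy as np

def emt_accumulate(dose_phases, mass_phases, voxel_maps, n_ref_voxels):
    """Energy/mass transfer (EMT) accumulation sketch.

    dose_phases: list of 1-D arrays, dose per voxel for each phase (Gy)
    mass_phases: list of 1-D arrays, voxel mass for each phase (g)
    voxel_maps:  list of 1-D int arrays mapping each source voxel to a
                 reference-phase voxel (e.g., from a deformation field)
    """
    energy = np.zeros(n_ref_voxels)
    mass = np.zeros(n_ref_voxels)
    for dose, m, vmap in zip(dose_phases, mass_phases, voxel_maps):
        # Energy deposited in a voxel is dose times mass. Both energy and
        # mass are pushed to the mapped reference voxel and summed there,
        # so many-to-one mappings under deformation are handled
        # consistently (unlike plain dose interpolation).
        np.add.at(energy, vmap, dose * m)
        np.add.at(mass, vmap, m)
    # Cumulative dose = total energy / total mass (guarding empty voxels).
    return np.divide(energy, mass, out=np.zeros_like(energy),
                     where=mass > 0)
```

Note the use of `np.add.at`, which accumulates correctly even when several source voxels map to the same reference voxel, the exact situation where interpolation-based DDM becomes ambiguous.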
NASA Astrophysics Data System (ADS)
Debats, Stephanie Renee
Smallholder farms dominate in many parts of the world, including Sub-Saharan Africa. These systems are characterized by small, heterogeneous, and often indistinct field patterns, requiring a specialized methodology to map agricultural landcover. In this thesis, we developed a benchmark labeled data set of high-resolution satellite imagery of agricultural fields in South Africa. We presented a new approach to mapping agricultural fields, based on efficient extraction of a vast set of simple, highly correlated, and interdependent features, followed by a random forest classifier. The algorithm achieved similarly high performance across agricultural types, including spectrally indistinct smallholder fields, and demonstrated the ability to generalize across large geographic areas. In sensitivity analyses, we determined that multi-temporal images provided greater performance gains than the addition of multi-spectral bands. We also demonstrated how active learning can be incorporated into the algorithm to create smaller, more efficient training data sets, which reduced computational resources, minimized the need for humans to hand-label data, and boosted performance. We designed a patch-based uncertainty metric to drive the active learning framework, based on the regular grid of a crowdsourcing platform, and demonstrated how subject matter experts can be replaced with fleets of crowdsourcing workers. Our active learning algorithm achieved performance similar to that of an algorithm trained with randomly selected data, but with 62% fewer data samples. This thesis furthers the goal of providing accurate agricultural landcover maps at a scale that is relevant for the dominant smallholder class. Accurate maps are crucial for monitoring and promoting agricultural production.
Furthermore, improved agricultural landcover maps will aid a host of other applications, including landcover change assessments, cadastral surveys to strengthen smallholder land rights, and constraints for crop modeling and famine prediction.
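The patch-based, uncertainty-driven selection at the heart of the active learning framework above can be illustrated with a short sketch. We use a least-confidence score averaged over a patch as a stand-in uncertainty metric; the function names and the exact score are our assumptions, not necessarily the thesis's metric.

```python
import numpy as np

def patch_uncertainty(probs):
    """Least-confidence uncertainty for one patch.

    probs: array of shape (n_pixels, n_classes) with per-pixel class
    probabilities (e.g., from a random forest). The score is one minus
    the top class probability, averaged over the patch's pixels.
    """
    return np.mean(1.0 - probs.max(axis=-1))

def select_patches(prob_maps, k):
    """Rank candidate patches by uncertainty and return the indices of
    the k most uncertain ones; these would be routed to crowdsourcing
    workers for labeling, then added to the training set."""
    scores = [patch_uncertainty(p) for p in prob_maps]
    return np.argsort(scores)[::-1][:k]
```

Iterating this loop (train, score unlabeled patches, label the most uncertain, retrain) is what lets a classifier reach comparable accuracy with far fewer labeled samples than random selection.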
2015-09-24
This cylindrical projection map of Pluto, in enhanced, extended color, is the most detailed color map of Pluto ever made from NASA's New Horizons data. It uses recently returned color imagery from the New Horizons Ralph camera, draped onto a base map of images from the spacecraft's Long Range Reconnaissance Imager (LORRI). The map can be zoomed in to reveal exquisite detail of high scientific value. Color variations have been enhanced to bring out subtle differences. The colors used in this map are from the blue, red, and near-infrared filter channels of the Ralph instrument. http://photojournal.jpl.nasa.gov/catalog/PIA19956
Algorithms for optimization of branching gravity-driven water networks
NASA Astrophysics Data System (ADS)
Dardani, Ian; Jones, Gerard F.
2018-05-01
The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of the cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method, including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to the broad range of pipe sizes typical of the GDWNs of interest in this work, for both smooth and commercial-steel roughness values.
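The step of mapping a continuous-diameter solution onto a discrete catalog can be illustrated with a simple round-up rule: choosing the next available size at or above each continuous diameter keeps head losses at or below those of the continuous design, so feasibility is preserved at somewhat higher cost. This is a hedged sketch of one plausible mapping, and explains why the continuous and mapped solutions bound the discrete optimum from below and above; it is not necessarily the authors' exact procedure.

```python
import bisect

def map_to_discrete(continuous_diams, catalog):
    """Map a continuous-diameter pipe solution onto a discrete catalog.

    Each diameter is rounded UP to the nearest available commercial size,
    so the mapped network's head losses do not exceed the continuous
    solution's and the design stays hydraulically feasible. Diameters
    above the largest catalog size are clamped to that size.
    """
    catalog = sorted(catalog)
    mapped = []
    for d in continuous_diams:
        i = bisect.bisect_left(catalog, d)  # first size >= d
        mapped.append(catalog[min(i, len(catalog) - 1)])
    return mapped
```

Because each pipe moves by at most one catalog step, the cost gap between the mapped design and the continuous lower bound is typically small, consistent with the "within one diameter size" observation above.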
Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Kokaly, Raymond F.; Sutley, Steve J.; Dalton, J. Brad; McDougal, Robert R.; Gent, Carol A.
2003-01-01
Imaging spectroscopy is a tool that can be used to spectrally identify and spatially map materials based on their specific chemical bonds. Spectroscopic analysis requires significantly more sophistication than has been employed in conventional broadband remote sensing analysis. We describe a new system that is effective at material identification and mapping: a set of algorithms within an expert system decision-making framework that we call Tetracorder. The expertise in the system has been derived from scientific knowledge of spectral identification. The expert system rules are implemented in a decision tree where multiple algorithms are applied to spectral analysis, additional expert rules and algorithms can be applied based on initial results, and more decisions are made until spectral analysis is complete. Because certain spectral features are indicative of specific chemical bonds in materials, the system can accurately identify and map those materials. In this paper we describe the framework of the decision-making process used for spectral identification, describe specific spectral feature analysis algorithms, and give examples of what analyses and types of maps are possible with imaging spectroscopy data. We also present the expert system rules that describe which diagnostic spectral features are used in the decision-making process for a set of spectra of minerals and other common materials. We demonstrate the applications of Tetracorder to identify and map surface minerals, to detect sources of acid rock drainage, and to map vegetation species, ice, melting snow, water, and water pollution, all with one set of expert system rules. Mineral mapping can aid in geologic mapping and fault detection and can provide a better understanding of weathering, mineralization, hydrothermal alteration, and other geologic processes.
Environmental site assessment, such as mapping source areas of acid mine drainage, has resulted in the acceleration of site cleanup, saving millions of dollars and years in cleanup time. Imaging spectroscopy data and Tetracorder analysis can be used to study both terrestrial and planetary science problems. Imaging spectroscopy can be used to probe planetary systems, including their atmospheres, oceans, and land surfaces.
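The core identification step, matching diagnostic absorption features in an observed spectrum against a reference library, can be illustrated with a drastically simplified sketch: remove a linear continuum to isolate feature shape, then score each library entry by correlation. Tetracorder's actual fit algorithms and expert rules are far richer; the function names and scoring choice here are our assumptions.

```python
import numpy as np

def continuum_removed(s):
    # Divide out a linear continuum drawn between the endpoints, so the
    # absorption-feature shape is compared independently of brightness.
    cont = np.linspace(s[0], s[-1], len(s))
    return s / cont

def best_material(spectrum, library):
    """Return the name of the library spectrum whose continuum-removed
    shape best correlates with the observed spectrum (a simplified
    stand-in for an expert-system feature-fit metric)."""
    obs = continuum_removed(np.asarray(spectrum, dtype=float))
    scores = {}
    for name, ref in library.items():
        scores[name] = np.corrcoef(obs, continuum_removed(ref))[0, 1]
    return max(scores, key=scores.get)
```

A real system would restrict the comparison to each material's diagnostic wavelength windows and apply follow-up rules before accepting an identification, which is precisely the decision-tree layering described above.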
Bravyi-Kitaev Superfast simulation of electronic structure on a quantum computer.
Setia, Kanav; Whitfield, James D
2018-04-28
Present quantum computers typically work with distinguishable qubits as their computational units. To simulate indistinguishable fermionic particles, one must first map the fermionic state onto the state of the qubits. The Bravyi-Kitaev Superfast (BKSF) algorithm can be used to accomplish this mapping. The BKSF mapping has connections to quantum error correction and opens the door to new ways of understanding fermionic simulation in a topological context. Here, we present the first detailed exposition of the BKSF algorithm for molecular simulation. We provide the BKSF-transformed qubit operators and report on our implementation of the BKSF fermion-to-qubit transform in OpenFermion. In this initial study of the hydrogen molecule, we compared the BKSF, Jordan-Wigner, and Bravyi-Kitaev transforms under the Trotter approximation. The gate count to implement BKSF is lower than that of Jordan-Wigner but higher than that of Bravyi-Kitaev. We considered different orderings of the exponentiated terms and found lower Trotter errors than those previously reported for the Jordan-Wigner and Bravyi-Kitaev algorithms. These results open the door to further study of the BKSF algorithm for quantum simulation.
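The BKSF transform itself is intricate, but the simpler Jordan-Wigner mapping used above as a baseline can be sketched directly: the annihilation operator for mode j becomes a string of Pauli-Z operators on the preceding qubits (carrying the fermionic parity) followed by a lowering operator on qubit j. The sketch below builds these matrices explicitly and lets one verify the canonical anticommutation relations; matrix conventions are ours and this is an illustration, not OpenFermion's implementation.

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])
# sigma^- = (X + iY)/2: lowers |1> to |0> (occupied -> empty)
SM = np.array([[0.0, 1.0],
               [0.0, 0.0]])

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, n):
    """Jordan-Wigner image of the fermionic annihilation operator a_j on
    n modes: Z on qubits 0..j-1 (the parity string), sigma^- on qubit j,
    identity on the rest. The Z string supplies the fermionic sign."""
    return kron_all([Z] * j + [SM] + [I] * (n - j - 1))
```

Checking that these matrices satisfy {a_i, a_j^dagger} = delta_ij and {a_i, a_j} = 0 confirms the mapping reproduces fermionic statistics on distinguishable qubits, which is exactly the requirement BKSF meets with a different (topologically motivated) encoding.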