A fast 3D region growing approach for CT angiography applications
NASA Astrophysics Data System (ADS)
Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang
2004-05-01
Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity, growing, or merging criteria. However, one disadvantage of conventional region growing is redundancy: it requires a large amount of memory, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive, single-pass 3D region growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions within each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve segmentation speed for very large 3D CTA volumes, the algorithm is applied repeatedly to newly updated local cubes; the next cube is estimated by checking isolated segmented regions on all six faces of the current one. This local, non-recursive 3D region-growing algorithm is both memory- and computation-efficient. Clinical testing on brain CTA shows that the technique can effectively remove the whole skull and most of the bones of the skull base, revealing the cerebral vascular structures clearly.
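To make the row-run idea concrete, here is a minimal 2D Python sketch of run-based, single-pass region labeling in the spirit of SymRG; the function names, the tolerance-based homogeneity test, and the union-find bookkeeping are illustrative assumptions, and the paper's actual algorithm works in 3D with local-cube scheduling.

```python
import numpy as np

def find(parent, i):
    """Follow parent links to the root label (with path halving)."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def run_based_labeling(img, tol=10):
    """Single-pass, run-based labeling of a 2D image: rows are cut into
    homogeneous runs, then runs on adjacent rows are merged."""
    runs, rows = [], [[] for _ in range(img.shape[0])]
    for r in range(img.shape[0]):
        c = 0
        while c < img.shape[1]:
            c0 = c
            while c + 1 < img.shape[1] and abs(int(img[r, c + 1]) - int(img[r, c0])) <= tol:
                c += 1
            rows[r].append(len(runs))
            runs.append((r, c0, c, int(img[r, c0])))   # (row, start, end, value)
            c += 1
    parent = list(range(len(runs)))
    for r in range(1, img.shape[0]):                   # merge across adjacent rows
        for a in rows[r]:
            for b in rows[r - 1]:
                if runs[a][1] <= runs[b][2] and runs[b][1] <= runs[a][2] \
                        and abs(runs[a][3] - runs[b][3]) <= tol:
                    parent[find(parent, a)] = find(parent, b)
    labels = np.zeros(img.shape, dtype=int)
    for i, (r, c0, c1, _) in enumerate(runs):
        labels[r, c0:c1 + 1] = find(parent, i)
    return labels
```

Because each pixel is visited once to form runs and merges touch only run records, memory and time stay proportional to the number of runs rather than to region sizes.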
Double regions growing algorithm for automated satellite image mosaicking
NASA Astrophysics Data System (ADS)
Tan, Yihua; Chen, Chen; Tian, Jinwen
2011-12-01
Feathering is the most widely used method for seamless satellite image mosaicking. A simple but effective algorithm, double regions growing (DRG), which exploits the shape content of the images' valid regions, is proposed for generating a robust feathering line before feathering. It works without any human intervention, and experiments on real satellite images show the advantages of the proposed method.
Segmentation of remotely sensed data using parallel region growing
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Cox, S. C.
1983-01-01
The improved spatial resolution of the new Earth resources satellites will increase the need for effective use of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored, and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
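As a rough illustration, a mean/variance-based similarity test between two adjacent regions might look like the following Python sketch; the pooled-variance form and the threshold t are assumptions, not the paper's exact criterion.

```python
import math

def should_merge(n1, mean1, var1, n2, mean2, var2, t=2.0):
    """Merge two adjacent regions if their means differ by less than
    t pooled standard errors; n*, mean*, var* are region statistics."""
    pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / max(n1 + n2 - 2, 1)
    se = math.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2)) or 1e-9
    return abs(mean1 - mean2) < t * se
```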
NASA Astrophysics Data System (ADS)
Selva Bhuvaneswari, K.; Geetha, P.
2017-05-01
Magnetic resonance imaging segmentation refers to the process of assigning labels to sets of pixels or multiple regions. It plays a major role in biomedical applications, as it is widely used by radiologists to partition medical images into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The entire segmentation process of the proposed work comprises three phases: a threshold generation with dynamic modified region growing phase, a texture feature generation phase, and a region merging phase. In the first phase, dynamic modified region growing is performed on the input image by dynamically varying two thresholds, which are optimised by the firefly algorithm. After the region-grown segmented image is obtained, edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results of the texture feature generation phase are combined with those of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After the abnormal tissues are identified, classification is performed by a hybrid kernel-based SVM (Support Vector Machine). The performance of the proposed method is analysed with k-fold cross-validation, and the method is implemented in MATLAB on a variety of images.
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is the special code required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and multiple-processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
Flood inundation extent mapping based on block compressed tracing
NASA Astrophysics Data System (ADS)
Shen, Dingtao; Rui, Yikang; Wang, Jiechen; Zhang, Yu; Cheng, Liang
2015-07-01
Flood inundation extent, depth, and duration are important factors in flood hazard evaluation. At present, flood inundation analysis is based mainly on a seeded region-growing algorithm, which is inefficient because it requires excessive recursive computation and cannot process massive datasets. To address this problem, we propose a block compressed tracing algorithm for mapping the flood inundation extent, which reads the DEM data in blocks before transferring them to raster compression storage. This allows a smaller computer memory to process a larger amount of data, which solves the memory problem of the regular seeded region-growing algorithm. In addition, a raster boundary tracing technique allows the algorithm to avoid the time-consuming computations required by seeded region growing. Finally, we conduct a comparative evaluation in the Chin-sha River basin; the results show that the proposed method solves the problem of mapping flood inundation extent from massive DEM datasets with higher computational efficiency than the original method, making it suitable for practical applications.
NASA Astrophysics Data System (ADS)
Aziz, Aamer; Hu, Qingmao; Nowinski, Wieslaw L.
2004-04-01
The human cerebral ventricular system is a complex structure that is essential for well-being, and changes in it reflect disease. It is clinically imperative that the ventricular system be studied in detail, and computer-assisted algorithms are essential for this purpose. We have developed a novel (patent pending) and robust anatomical knowledge-driven algorithm for automatic extraction of the cerebral ventricular system from MRI. The algorithm is not only unique in its image processing aspect but also incorporates knowledge of neuroanatomy, radiological properties, and the variability of the ventricular system. The ventricular system is divided into six 3D regions based on the anatomy and its variability. Within each ventricular region a 2D region of interest (ROI) is defined and then further subdivided into sub-regions. Strict conditions that detect and prevent leakage into the extra-ventricular space are specified for each sub-region based on anatomical knowledge. Each ROI is processed to calculate its local statistics and the local intensity ranges of cerebrospinal fluid and grey and white matter, set a seed point within the ROI, grow the region directionally in 3D, check anti-leakage conditions, correct the growing if leakage occurs, and connect all unconnected grown regions by relaxing the growing conditions. The algorithm was tested qualitatively and quantitatively on normal and pathological MRI cases and worked well. In this paper we discuss in more detail the inclusion of anatomical knowledge in the algorithm and the usefulness of our approach from a clinical perspective.
Vision-based posture recognition using an ensemble classifier and a vote filter
NASA Astrophysics Data System (ADS)
Ji, Peng; Wu, Changcheng; Xu, Xiaonong; Song, Aiguo; Li, Huijun
2016-10-01
Posture recognition is a very important Human-Robot Interaction (HRI) modality. To segment effective postures from an image, we propose an improved region growing algorithm combined with a single Gaussian color model. Experiments show that the improved region growing algorithm extracts a more complete and accurate posture than the traditional single Gaussian model or region growing alone, while eliminating similar regions from the background. For the recognition stage, we propose a CNN ensemble classifier to improve the recognition rate, and a vote filter applied to the sequence of recognition results to reduce misjudgments during continuous gesture control. The proposed CNN ensemble classifier yields a 96.27% recognition rate, better than that of a single CNN classifier, and the proposed vote filter further improves the results and reduces misjudgments during consecutive gesture switches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dise, J; Liang, X; Lin, L
Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day-one CT scans were applied to subsequent CTs and forward-calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day-two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatic and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 to 43 minutes, with a mean of 37 minutes; the bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatic and manually digitized plans. D90% to the CTV was 91.5±4.4% for the manual digitization versus 91.4±4.4% for the automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic digitization tool was shown to be accurate compared to manual digitization.
Parallelized seeded region growing using CUDA.
Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung
2014-01-01
This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm: its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, on quad-core CPUs, and in shader language programming, using synthetic datasets and 20 body CT scans. The experimental results show that the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist segmentation during massive CT screening tests.
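For context, a minimal serial SRG sketch in Python follows; it is not the paper's CUDA kernel, but it shows why the running time scales with the region size: every accepted voxel is dequeued and its neighbors examined once.

```python
from collections import deque
import numpy as np

def srg(volume, seed, tol=50.0):
    """Grow a region from a seed voxel, accepting 6-connected neighbors
    whose intensity lies within `tol` of the running region mean."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    total, count = float(volume[seed]), 1
    queue = deque([seed])
    while queue:                       # one dequeue per accepted voxel
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < volume.shape[i] for i in range(3)) \
                    and not mask[p] and abs(float(volume[p]) - total / count) <= tol:
                mask[p] = True
                total += float(volume[p])
                count += 1
                queue.append(p)
    return mask
```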
Algorithms and programming tools for image processing on the MPP:3
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1987-01-01
This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction; work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed; these algorithms permit different-sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given, including a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.
Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2011-01-01
A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machines classification is performed. Then, a hierarchical stepwise optimization algorithm is applied, iteratively merging the regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining the DC between regions as a function of region statistical and geometrical features together with classification probabilities. Experimental results are presented for a 200-band AVIRIS image of a vegetation area in Northwestern Indiana and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
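A hypothetical sketch of such a dissimilarity criterion is given below; the specific spectral term, probability-overlap term, and weighting are assumptions made for illustration.

```python
import numpy as np

def dissimilarity(mean1, prob1, n1, mean2, prob2, n2, alpha=0.5):
    """DC between two regions: mean* are mean spectra, prob* are
    per-class probability vectors, n* are region sizes in pixels."""
    spectral = np.linalg.norm(mean1 - mean2)           # statistical feature
    class_term = 1.0 - np.minimum(prob1, prob2).sum()  # probability overlap
    size_term = (n1 * n2) / (n1 + n2)                  # favors merging small regions
    return size_term * ((1 - alpha) * spectral + alpha * class_term)
```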
A kind of color image segmentation algorithm based on super-pixel and PCNN
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The PCNN (Pulse Coupled Neural Network) has a biological background, and when applied to image segmentation it can be viewed as a region-based method; however, due to the dynamics of the PCNN, many unconnected neurons pulse at the same time, so different regions must be identified for further processing. The existing PCNN image segmentation algorithm based on region growing is designed for grayscale images and cannot be used directly for color image segmentation. In addition, super-pixels better preserve image edges while reducing the influence of individual pixel differences on the segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Growth is then stopped or continued by comparing the averages of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed color image segmentation algorithm is fast and effective, with good accuracy.
Segmentation of bone and soft tissue regions in digital radiographic images of extremities
NASA Astrophysics Data System (ADS)
Pakin, S. Kubilay; Gaborski, Roger S.; Barski, Lori L.; Foos, David H.; Parker, Kevin J.
2001-07-01
This paper presents an algorithm for segmenting computed radiography (CR) images of extremities into bone and soft tissue regions. The algorithm is region-based: regions are constructed using a growing procedure with two different statistical tests. Following the growing process, a tissue classification procedure is employed to label each region as either bone or soft tissue. This binary classification is achieved by a voting procedure that clusters the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary because of the strong exposure variations seen on the imaging plate. Also, the existence of regions large enough for exposure variations to be observed within them makes it necessary to use overlapping blocks during classification. After the classification step, the resulting bone and soft tissue regions are refined by fitting a second-order surface to each tissue and re-evaluating the label of each region according to its distance to the surfaces. The performance of the algorithm is tested on a variety of extremity images using manually segmented images as the gold standard. The experiments showed that our algorithm provided a bone boundary with an average area overlap of 90% compared to the gold standard.
Student beats the teacher: deep neural networks for lateral ventricles segmentation in brain MR
NASA Astrophysics Data System (ADS)
Ghafoorian, Mohsen; Teuwen, Jonas; Manniesing, Rashindra; Leeuw, Frank-Erik d.; van Ginneken, Bram; Karssemeijer, Nico; Platel, Bram
2018-03-01
Ventricular volume and its progression are known to be linked to several brain diseases such as dementia and schizophrenia. Accurate measurement of ventricle volume is therefore vital for longitudinal studies of these disorders, making automated ventricle segmentation algorithms desirable. In the past few years, deep neural networks have been shown to outperform classical models in many imaging domains. However, the success of deep networks depends on manually labeled data sets, which are expensive to acquire, especially for higher-dimensional data in the medical domain. In this work, we show that deep neural networks can be trained on much cheaper-to-acquire pseudo-labels (e.g., generated by other, less accurate automated methods) and still produce segmentations more accurate than the labels themselves. To show this, we use noisy segmentation labels generated by a conventional region growing algorithm to train a deep network for lateral ventricle segmentation. On a large manually annotated test set, we then show that the network significantly outperforms the conventional region growing algorithm that produced its training labels. Our experiments report a Dice Similarity Coefficient (DSC) of 0.874 for the trained network compared to 0.754 for the conventional region growing algorithm (p < 0.001).
Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software
NASA Technical Reports Server (NTRS)
Tilton, James C.
2003-01-01
A hierarchical set of image segmentations is a set of several segmentations of the same image at different levels of detail, in which the segmentations at coarser levels can be produced from simple merges of regions at finer levels. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic region growing.
Study of robot landmark recognition with complex background
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Yang, Jia
2007-12-01
Perceiving and recognising environmental characteristics is of great importance for assisting robots in path planning, position navigation, and task performance. To solve the problem of monocular-vision landmark recognition for a mobile intelligent robot moving against a complex background, a nested region growing algorithm is proposed that fuses prior color information and grows from the current maximum convergence center, allowing localization invariant to changes in position, scale, rotation, jitter, and weather conditions. First, an experimentally derived threshold in the RGB color model is used for an initial image segmentation, in which some objects and partial scenes with colors similar to the landmarks are detected along with the landmarks themselves. Second, with the current maximum convergence center of the segmented image as the growing seed point, the nested region growing algorithm establishes several Regions of Interest (ROI) in order. Based on shape characteristics, a quick and effective primitive-based contour analysis decides whether the current ROI is kept or discarded after each region growing step, and each retained ROI is initially judged and positioned. When the position information is fed back to the gray image, the whole landmark is extracted accurately by a second segmentation on the local image restricted to the landmark area. Finally, landmarks are recognised by a Hopfield neural network. Results of experiments on a large number of images with both campus and urban backgrounds show the effectiveness of the proposed algorithm.
Gao, Wei-Wei; Shen, Jian-Xin; Wang, Yu-Liang; Liang, Chun; Zuo, Jing
2013-02-01
In order to automatically detect hemorrhages in fundus images and develop an automated diabetic retinopathy screening system, a novel algorithm, locally adaptive region growing based on multi-template matching, was established and studied. First, the spectral signatures of the major anatomical structures of the fundus were studied so that the right channel among the RGB channels could be selected for each segmentation target. Second, the fundus image was preprocessed by means of HSV brightness correction and contrast limited adaptive histogram equalization (CLAHE). Then, seeds for region growing were found by removing the optic disc and vessels from the result of normalized cross-correlation (NCC) template matching on the preprocessed image with several templates. Finally, locally adaptive region growing segmentation was used to find the exact contours of the hemorrhages, completing the automated detection of the lesions. The approach was tested on 90 fundus images of different resolutions with variable color, brightness, and quality. Results suggest that the approach can detect hemorrhages in fundus images quickly and effectively, and that it is stable and robust; as a result, it can meet clinical demands.
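A hedged sketch of the seed-finding step might look like this in Python, using scikit-image's normalized cross-correlation; the threshold and the upstream optic-disc/vessel mask are placeholders.

```python
import numpy as np
from skimage.feature import match_template

def find_seeds(channel, template, exclude_mask, ncc_thresh=0.6):
    """Return (row, col) seed candidates where the NCC response exceeds
    a threshold, outside the optic-disc/vessel exclusion mask."""
    ncc = match_template(channel, template, pad_input=True)
    ncc[exclude_mask] = -1.0          # suppress disc and vessel responses
    return np.argwhere(ncc > ncc_thresh)
```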
NASA Astrophysics Data System (ADS)
B. Shokouhi, Shahriar; Fooladivanda, Aida; Ahmadinejad, Nasrin
2017-12-01
A computer-aided detection (CAD) system is introduced in this paper for the detection of breast lesions in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The proposed CAD system first compensates for motion artifacts and segments the breast region. Then, potential lesion voxels are detected and used as initial seed points for the seeded region-growing algorithm. A new and robust region-growing algorithm incorporating Fuzzy C-means (FCM) clustering and a vesselness filter is proposed to segment the potential lesion regions. Subsequently, false positive detections are reduced by a discrimination step based on 3D morphological characteristics of the potential lesion regions and kinetic features, which are fed to a support vector machine (SVM) classifier. The performance of the proposed CAD system is evaluated using the free-response operating characteristic (FROC) curve. We introduce our collected dataset, which includes 76 DCE-MRI studies with 63 malignant and 107 benign lesions, and use it to verify the accuracy of the proposed CAD system. At 5.29 false positives per case, the CAD system accurately detects 94% of the breast lesions.
Towards Automatic Image Segmentation Using Optimised Region Growing Technique
NASA Astrophysics Data System (ADS)
Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi
Image analysis is adopted extensively in many applications, such as digital forensics, medical treatment, and industrial inspection, primarily for diagnostic purposes, and there is growing interest among researchers in developing new segmentation techniques to aid the diagnostic process. Manual segmentation of images is labour-intensive, extremely time-consuming, and prone to human error, so an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that works for all images, as image segmentation is complex and unique to each application domain. To fill this gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images taken for real-life dental diagnosis.
Road extraction from aerial images using a region competition algorithm.
Amo, Miriam; Martínez, Fernando; Torre, Margarita
2006-05-01
In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed to determine, from image information, whether more initial points must be added. The algorithm obtains not only the road centerline but also the road sides. An initial simple model is deformed using region growing techniques to obtain a rough road approximation, which is then refined by region competition. This approach delivers the simplest output vector information, fully recovering the road details as they appear in the image without performing any kind of symbolization. In effect, a general road model is refined using a reliable method for detecting transitions between regions, proposed in order to obtain information for feeding a large-scale Geographic Information System.
Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.
2018-04-01
A new method is proposed in this paper for instantaneous waterline extraction, combining point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: the Mean Shift algorithm is used to segment the coastal zone of high resolution remote sensing images into small regions containing semantic information; region features are extracted by integrating the LiDAR data and the image surface area; initial waterlines are extracted by the α-shape algorithm; a region growing algorithm refines the coastline, with a growth rule integrating the intensity and topography of the LiDAR data; and the coastline is smoothed. Experiments demonstrate the efficiency of the proposed method.
Validation tools for image segmentation
NASA Astrophysics Data System (ADS)
Padfield, Dirk; Ross, James
2009-02-01
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, automatic segmentation of regions such as the brain and tumors greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and degrees of robustness, and evaluating these algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics that compare the automatic to the manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit (ITK) and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
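As an illustration of the similarity metrics such validation rests on, here is a minimal Python sketch of the Dice and Jaccard overlap coefficients between an automatic and a manual mask (the paper does not specify its exact metric set, so these are representative choices).

```python
import numpy as np

def dice(auto, manual):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(auto, manual).sum()
    return 2.0 * inter / (auto.sum() + manual.sum())

def jaccard(auto, manual):
    """Jaccard (intersection-over-union) between two boolean masks."""
    inter = np.logical_and(auto, manual).sum()
    return inter / np.logical_or(auto, manual).sum()
```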
NASA Astrophysics Data System (ADS)
Jia, Duo; Wang, Cangjiao; Lei, Shaogang
2018-01-01
Mapping vegetation dynamic types in mining areas is significant for revealing the mechanisms of environmental damage and for guiding ecological construction. Dynamic types of vegetation can be identified by applying interannual normalized difference vegetation index (NDVI) time series. However, phase differences and time shifts in interannual time series decrease mapping accuracy in mining regions. To overcome these problems and increase the accuracy of mapping vegetation dynamics, an interannual Landsat time series for optimum vegetation growing status was first constructed using the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM) algorithm. We then propose a Markov random field optimized, semisupervised, Gaussian dynamic time warping kernel-based fuzzy c-means (FCM) clustering algorithm for interannual NDVI time series to map dynamic vegetation types in mining regions. The proposed algorithm has been tested in the Shengli and Shendong mining regions, typical representatives of China's open-pit and underground mining regions, respectively. Experiments show that the proposed algorithm overcomes phase differences and time shifts to achieve better performance when mapping vegetation dynamic types. The overall accuracies for the Shengli and Shendong mining regions were 93.32% and 89.60%, respectively, improvements of 7.32% and 25.84% compared with the original semisupervised FCM algorithm.
Face detection and eyeglasses detection for thermal face recognition
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
2012-01-01
Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition in thermal images. Infrared light cannot pass through glasses, so eyeglasses appear as dark areas in a thermal image. One possible solution is to detect the eyeglasses and exclude their areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed: region growing and morphology operations are used to segment the body of a subject, and then the derivatives of two projections (horizontal and vertical) are calculated and analyzed to locate a minimal rectangle containing the face area. The search region for a pair of eyeglasses then lies within the detected face area. The eyeglasses detection algorithm produces either a binary mask if eyeglasses are present, or an empty set if there are none. The proposed eyeglasses detection algorithm employs block processing, region growing, and a priori knowledge (i.e., low mean and variance within glasses areas, and the typical shapes and locations of eyeglasses). The results of face detection and eyeglasses detection are quantitatively measured and analyzed against manually defined ground truths (for both face and eyeglasses). Our experimental results show that the proposed face detection and eyeglasses detection algorithms perform very well with respect to the predefined ground truths.
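A simplified Python sketch of the projection-profile step is shown below; the peak-picking rules and the assumed face aspect ratio are illustrative guesses, not the paper's exact procedure.

```python
import numpy as np

def face_rectangle(body_mask):
    """Locate a face rectangle from the derivatives of the horizontal
    and vertical projection profiles of a segmented body mask."""
    h_proj = body_mask.sum(axis=1).astype(float)       # one value per row
    v_proj = body_mask.sum(axis=0).astype(float)       # one value per column
    dh, dv = np.diff(h_proj), np.diff(v_proj)
    top = int(np.argmax(dh))                           # sharpest vertical rise
    left, right = int(np.argmax(dv)), int(np.argmin(dv))
    bottom = min(top + int(1.3 * (right - left)),      # assumed aspect ratio
                 body_mask.shape[0] - 1)
    return top, bottom, left, right
```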
Reliable two-dimensional phase unwrapping method using region growing and local linear estimation.
Zhou, Kun; Zaitsev, Maxim; Bao, Shanglian
2009-10-01
In MRI, phase maps can provide useful information about parameters such as field inhomogeneity, the velocity of blood flow, and the chemical shift between water and fat. Because phase is defined in the (-π, π] range, however, phase wraps often occur, which complicates image analysis and interpretation. This work presents a two-dimensional phase unwrapping algorithm that uses quality-guided region growing and local linear estimation. The quality map employs the variance of the second-order partial derivatives of the phase as the quality criterion. Phase information from unwrapped neighboring pixels is used to predict the correct phase of the current pixel using a linear regression method. The algorithm was tested on both simulated and real data, and is shown to successfully unwrap phase images that are corrupted by noise and have rapidly changing phase. (c) 2009 Wiley-Liss, Inc.
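The following Python sketch shows the general shape of quality-guided unwrapping by region growing; for brevity it predicts each pixel from a single unwrapped neighbor and uses the magnitude of second-order phase differences as a quality proxy, whereas the paper uses local linear regression and the variance of the second-order partial derivatives.

```python
import heapq
import numpy as np

def _wrap(d):
    """Wrap phase differences into [-pi, pi)."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def quality_guided_unwrap(phase, seed):
    q = np.zeros(phase.shape, dtype=float)   # higher = better quality
    q[1:-1, :] -= np.abs(_wrap(phase[2:, :] - 2 * phase[1:-1, :] + phase[:-2, :]))
    q[:, 1:-1] -= np.abs(_wrap(phase[:, 2:] - 2 * phase[:, 1:-1] + phase[:, :-2]))
    out = phase.astype(float)
    done = np.zeros(phase.shape, dtype=bool)
    done[seed] = True
    heap = []

    def push_neighbors(p):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (p[0] + dr, p[1] + dc)
            if 0 <= n[0] < phase.shape[0] and 0 <= n[1] < phase.shape[1] and not done[n]:
                heapq.heappush(heap, (-q[n], n, p))    # p is the unwrapped predictor

    push_neighbors(seed)
    while heap:                                        # grow in quality order
        _, p, pred = heapq.heappop(heap)
        if done[p]:
            continue
        out[p] = out[pred] + _wrap(phase[p] - phase[pred])
        done[p] = True
        push_neighbors(p)
    return out
```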
Image Segmentation Analysis for NASA Earth Science Applications
NASA Technical Reports Server (NTRS)
Tilton, James C.
2010-01-01
NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this approach is satisfactory in many cases, it usually does not fully extract the information content of the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate this move from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. The integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations it produces. This presentation provides an overview of the RHSEG algorithm and describes how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites in remotely sensed data.
NASA Astrophysics Data System (ADS)
Egger, Jan; Nimsky, Christopher
2016-03-01
Due to the aging population, spinal diseases are becoming more common; for example, the lifetime risk of osteoporotic fracture is 40% for white women and 13% for white men in the United States. The number of surgical spinal procedures is therefore also increasing, and precise diagnosis plays a vital role in reducing complications and the recurrence of symptoms. Spinal imaging of the vertebral column is a tedious process subject to interpretation errors. In this contribution, we aim to reduce the time and error of vertebral interpretation by applying and studying the GrowCut algorithm for boundary segmentation between the vertebral body compacta and surrounding structures. GrowCut is a competitive region growing algorithm using cellular automata. For our study, vertebral T2-weighted Magnetic Resonance Imaging (MRI) scans were first manually outlined by neurosurgeons. Then, the vertebral bodies were segmented in the medical images by a GrowCut-trained physician using the semi-automated GrowCut algorithm. Afterwards, the results of both segmentation processes were compared using the Dice Similarity Coefficient (DSC) and the Hausdorff Distance (HD), yielding a DSC of 82.99+/-5.03% and an HD of 18.91+/-7.2 voxels, respectively. In addition, the segmentation times were measured, showing that a GrowCut segmentation, with an average time of less than six minutes (5.77+/-0.73), is significantly shorter than a purely manual outlining.
NASA Technical Reports Server (NTRS)
Walker, K. P.; Freed, A. D.
1991-01-01
New methods for integrating systems of stiff, nonlinear, first order, ordinary differential equations are developed by casting the differential equations into integral form. Nonlinear recursive relations are obtained that allow the solution to a system of equations at time t plus delta t to be obtained in terms of the solution at time t in explicit and implicit forms. Examples of accuracy obtained with the new technique are given by considering systems of nonlinear, first order equations which arise in the study of unified models of viscoplastic behaviors, the spread of the AIDS virus, and predator-prey populations. In general, the new implicit algorithm is unconditionally stable, and has a Jacobian of smaller dimension than that which is acquired by current implicit methods, such as the Euler backward difference algorithm; yet, it gives superior accuracy. The asymptotic explicit and implicit algorithms are suitable for solutions that are of the growing and decaying exponential kinds, respectively, whilst the implicit Euler-Maclaurin algorithm is superior when the solution oscillates, i.e., when there are regions in which both growing and decaying exponential solutions exist.
Attributed relational graphs for cell nucleus segmentation in fluorescence microscopy images.
Arslan, Salim; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem
2013-06-01
More rapid and accurate high-throughput screening in molecular cellular biology research has become possible with the development of automated microscopy imaging, for which cell nucleus segmentation commonly constitutes the core step. Although several promising methods exist for segmenting the nuclei of monolayer isolated and less-confluent cells, it still remains an open problem to segment the nuclei of more-confluent cells, which tend to grow in overlayers. To address this problem, we propose a new model-based nucleus segmentation algorithm. This algorithm models how a human locates a nucleus by identifying the nucleus boundaries and piecing them together. In this algorithm, we define four types of primitives to represent nucleus boundaries at different orientations and construct an attributed relational graph on the primitives to represent their spatial relations. Then, we reduce the nucleus identification problem to finding predefined structural patterns in the constructed graph and also use the primitives in region growing to delineate the nucleus borders. Working with fluorescence microscopy images, our experiments demonstrate that the proposed algorithm identifies nuclei better than previous nucleus segmentation algorithms.
Deformed Palmprint Matching Based on Stable Regions.
Wu, Xiangqian; Zhao, Qiushi
2015-12-01
Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.
Superpixel-based segmentation of glottal area from videolaryngoscopy images
NASA Astrophysics Data System (ADS)
Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail
2017-11-01
Segmentation of the glottal area with high accuracy is one of the major challenges in developing systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employ a superpixel algorithm to reveal the glottal area by eliminating the local variances of pixels caused by bleeding, blood vessels, and light reflections from the mucosa. Then, the glottal area is detected by a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained from both patients with pathologic vocal folds and healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and Dice coefficient of the hybrid method were 82%, 93%, and 82%, respectively, which are superior to state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for a computer-aided diagnosis system for clinical routine use.
An iterative approach to region growing using associative memories
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Cowart, A.
1983-01-01
Region growing is often given as a classical example of the recursive control structures used in image processing, which are awkward to implement in hardware when the intent is to segment an image at raster scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier signifying the region to which it belongs. Difficulties that would otherwise require recursion are handled by maintaining an equivalence table in hardware, transparent to the computer that reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
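The equivalence-table idea can be sketched in a few lines of Python; the class below is a software stand-in for what the paper implements in hardware, transparently to the computer reading the labels.

```python
class EquivalenceTable:
    """Labels are assigned in one raster scan; label collisions are
    recorded here and resolved by association instead of recursion."""
    def __init__(self):
        self.parent = {}

    def new_label(self):
        lbl = len(self.parent)
        self.parent[lbl] = lbl
        return lbl

    def find(self, a):                  # follow links to the root label
        while self.parent[a] != a:
            a = self.parent[a]
        return a

    def note_equivalent(self, a, b):    # two labels meet in the raster
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[max(ra, rb)] = min(ra, rb)
```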
CSF Based Non-Ground Points Extraction from LIDAR Data
NASA Astrophysics Data System (ADS)
Shen, A.; Zhang, W.; Shi, H.
2017-09-01
Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in fields such as medicine, forestry, and remote sensing. The algorithm has two core problems: the selection of seed points and the setting of the growth constraints, of which the selection of seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract non-ground seed points effectively. Experiments have shown that this method can obtain a good group of seed points compared with traditional methods; it is a new attempt at seed point extraction.
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing the globally best merges first. Such a segmentation approach, and two implementations of it on NASA's Massively Parallel Processor (MPP), are described. Application of the segmentation approach to data compression and image analysis is then described, and results are given for a LANDSAT Thematic Mapper image.
Adaptive region-growing with maximum curvature strategy for tumor segmentation in 18F-FDG PET
NASA Astrophysics Data System (ADS)
Tan, Shan; Li, Laquan; Choi, Wookjin; Kang, Min Kyu; D'Souza, Warren D.; Lu, Wei
2017-07-01
Accurate tumor segmentation in PET is crucial in many oncology applications. We developed an adaptive region-growing (ARG) algorithm with a maximum curvature strategy (ARG_MC) for tumor segmentation in PET. The ARG_MC repeatedly applies a confidence connected region-growing algorithm with an increasing relaxing factor f. The optimal relaxing factor (ORF) is then determined at the transition point on the f-volume curve, where the volume just grows from the tumor into the surrounding normal tissues. The ARG_MC, along with five widely used algorithms, was tested on a phantom with 6 spheres at different signal-to-background ratios and on two clinical datasets including 20 patients with esophageal cancer and 11 patients with non-Hodgkin lymphoma (NHL). The ARG_MC did not require any phantom calibration or any a priori knowledge of the tumor or PET scanner. The identified ORF varied with tumor type (mean ORF = 9.61, 3.78, and 2.55 for the phantom, esophageal cancer, and NHL datasets, respectively), and varied from one tumor to another. For the phantom, the ARG_MC ranked second in segmentation accuracy with an average Dice similarity index (DSI) of 0.86, only slightly worse than Daisne's adaptive thresholding method (DSI = 0.87), which required phantom calibration. For both the esophageal cancer dataset and the NHL dataset, the ARG_MC had the highest accuracy, with average DSIs of 0.87 and 0.84, respectively. The ARG_MC was robust to parameter settings and region-of-interest selection, and it did not depend on scanners, imaging protocols, or tumor types. Furthermore, the ARG_MC makes no assumption about tumor size or uptake distribution, making it suitable for segmenting tumors with heterogeneous FDG uptake. In conclusion, the ARG_MC is accurate, robust, and easy to use, and it provides a highly promising tool for PET tumor segmentation in the clinic.
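A hedged sketch of the ORF search is given below; `grow` stands in for the confidence connected region-growing step, and the curvature-of-the-f-volume-curve formulation follows the method's name rather than any implementation detail disclosed in the abstract.

```python
import numpy as np

def optimal_relaxing_factor(volume, seed, grow, fs=np.arange(1.0, 20.0, 0.5)):
    """grow(volume, seed, f) -> boolean mask; returns the f where the
    f-volume curve bends most sharply (the transition point)."""
    v = np.array([grow(volume, seed, f).sum() for f in fs], dtype=float)
    dv = np.gradient(v, fs)                    # first derivative of volume
    d2v = np.gradient(dv, fs)                  # second derivative
    curvature = np.abs(d2v) / (1.0 + dv ** 2) ** 1.5
    return fs[int(np.argmax(curvature))]
```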
Semi-automatic 3D lung nodule segmentation in CT using dynamic programming
NASA Astrophysics Data System (ADS)
Sargent, Dustin; Park, Sun Young
2017-02-01
We present a method for semi-automatic segmentation of lung nodules in chest CT that can be extended to general lesion segmentation in multiple modalities. Most semi-automatic algorithms for lesion segmentation and similar tasks use region growing or edge-based contour finding methods such as level sets. However, lung nodules and other lesions are often connected to surrounding tissues, which makes these algorithms prone to growing the nodule boundary into the surrounding tissue. To solve this problem, we apply a 3D extension of the 2D edge linking method with dynamic programming to find a closed surface in a spherical representation of the nodule ROI. The algorithm requires the user to draw a maximal diameter across the nodule in the slice in which the nodule cross-section is largest. We report the lesion volume estimation accuracy of our algorithm on the FDA lung phantom dataset, and the RECIST diameter estimation accuracy on the lung nodule dataset from the SPIE 2016 lung nodule classification challenge. The phantom results in particular demonstrate that our algorithm has the potential to mitigate the disparity in measurements performed by different radiologists on the same lesions, which could improve the accuracy of disease progression tracking.
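A 2D polar analogue of the dynamic-programming boundary search can be sketched as follows; the cost definition, the smoothness bound, and the omission of an explicit closure constraint between the first and last angles are simplifying assumptions relative to the paper's 3D spherical formulation.

```python
import numpy as np

def polar_dp_contour(cost, max_jump=2):
    """cost: (n_angles, n_radii) array, low where the boundary should
    pass; returns one radius index per angle. Closure between the last
    and first angle is left as a refinement."""
    n_a, n_r = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((n_a, n_r), dtype=int)
    for a in range(1, n_a):                    # sweep over angles
        for r in range(n_r):
            lo, hi = max(0, r - max_jump), min(n_r, r + max_jump + 1)
            prev = lo + int(np.argmin(acc[a - 1, lo:hi]))
            back[a, r] = prev                  # best smooth predecessor
            acc[a, r] += acc[a - 1, prev]
    radii = np.empty(n_a, dtype=int)
    radii[-1] = int(np.argmin(acc[-1]))
    for a in range(n_a - 2, -1, -1):           # backtrack the optimum
        radii[a] = back[a + 1, radii[a + 1]]
    return radii
```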
Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix
NASA Astrophysics Data System (ADS)
Lange, Holger
2005-04-01
Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.
Fenrich, Keith K; Zhao, Ethan Y; Wei, Yuan; Garg, Anirudh; Rose, P Ken
2014-04-15
Isolating specific cellular and tissue compartments from 3D image stacks for quantitative distribution analysis is crucial for understanding cellular and tissue physiology under normal and pathological conditions. Current approaches are limited because they are designed to map the distributions of synapses onto the dendrites of stained neurons and/or require specific proprietary software packages for their implementation. To overcome these obstacles, we developed algorithms to Grow and Shrink Volumes of Interest (GSVI) to isolate specific cellular and tissue compartments from 3D image stacks for quantitative analysis, and incorporated these algorithms into a user-friendly computer program that is open source and downloadable at no cost. The GSVI algorithm was used to isolate perivascular regions in the cortex of live animals and cell membrane regions of stained spinal motoneurons in histological sections. We tracked the real-time, intravital biodistribution of injected fluorophores with sub-cellular resolution from the vascular lumen to the perivascular and parenchymal space following a vascular microlesion, and mapped the precise distributions of membrane-associated KCC2 and gephyrin immunolabeling in dendritic and somatic regions of spinal motoneurons. Compared to existing approaches, the GSVI approach is specifically designed for isolating perivascular and membrane-associated regions for quantitative analysis, is user-friendly, and is free. The GSVI algorithm is useful for quantifying regional differences of stained biomarkers (e.g., cell membrane-associated channels) in relation to cell functions, and the effects of therapeutic strategies on the redistribution of biomolecules, drugs, and cells in diseased or injured tissues. Copyright © 2014 Elsevier B.V. All rights reserved.
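The grow-and-shrink idea can be approximated with plain morphological operations. Below is a hedged sketch (my own simplification, not the published GSVI code) that isolates a shell, such as the perivascular space around a binary vessel-lumen mask; the voxel counts are illustrative parameters:

    import numpy as np
    from scipy import ndimage

    def shell(mask, grow_vox=5, inner_vox=1):
        """Return the 'shell' between a grown copy and a near-boundary copy of
        a binary 3D mask, e.g. the perivascular region around a vessel mask."""
        grown = ndimage.binary_dilation(mask, iterations=grow_vox)
        core = ndimage.binary_dilation(mask, iterations=inner_vox)
        return grown & ~core

    # usage: mean fluorophore intensity inside the perivascular shell
    # perivascular = shell(vessel_mask); stack[perivascular].mean()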
An Enhanced TIMESAT Algorithm for Estimating Vegetation Phenology Metrics from MODIS Data
NASA Technical Reports Server (NTRS)
Tan, Bin; Morisette, Jeffrey T.; Wolfe, Robert E.; Gao, Feng; Ederer, Gregory A.; Nightingale, Joanne; Pedelty, Jeffrey A.
2012-01-01
An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates.
An Augmented Reality Endoscope System for Ureter Position Detection.
Yu, Feng; Song, Enmin; Liu, Hong; Li, Yunlong; Zhu, Jun; Hung, Chih-Cheng
2018-06-25
Iatrogenic injury of the ureter during clinical operations may cause serious complications and kidney damage. To avoid such medical accidents, it is necessary to provide ureter position information to the doctor. For the detection of the ureter position, a ureter position detection and display system based on augmented reality is proposed to detect the ureter that is covered by human tissue. There are two key issues which should be considered in this new system. One is how to detect the covered ureter that cannot be captured by the electronic endoscope, and the other is how to display the ureter position with stable and high-quality images. At the same time, any delayed processing by the system could disturb the surgery. Aided hardware detection methods and target detection algorithms are proposed in this system. To mark the ureter position, a surface-lighting plastic optical fiber (POF) with encoded light-emitting diode (LED) light is used to indicate the ureter position. The monochrome channel filtering algorithm (MCFA) is proposed to locate the ureter region more precisely. The ureter position is extracted using the proposed automatic region growing algorithm (ARGA), which utilizes the statistical information of the monochrome channel for the selection of the growing seed point. In addition, according to the pulse signal of the encoded light, recognition of bright and dark frames based on the aided hardware (BDAH) is proposed to expedite the processing speed. Experimental results demonstrate that the proposed endoscope system can identify 92.04% of the ureter region on average.
Progressive data transmission for anatomical landmark detection in a cloud.
Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D
2012-01-01
In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer-aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least 30 times bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
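The coarse-to-fine loop can be paraphrased as follows; this is only a structural sketch of the paper's idea, and fetch_region and detect are hypothetical helpers standing in for the data-center request and the trained detector:

    def detect_landmarks(fetch_region, detect, levels=(8, 4, 2, 1), margin=16):
        """Coarse-to-fine landmark detection with progressive transmission.
        fetch_region(bbox, res) and detect(img, res) are hypothetical helpers;
        bbox=None requests the whole (coarse) volume."""
        candidates = [None]                      # start with the full volume
        for res in levels:                       # coarsest resolution first
            found = []
            for c in candidates:
                bbox = None if c is None else tuple((x - margin, x + margin) for x in c)
                img = fetch_region(bbox, res)    # only this region crosses the network
                found.extend(detect(img, res))   # refined candidate positions
            candidates = found
        return candidates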
NASA Astrophysics Data System (ADS)
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida
2015-05-01
Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Due to the requirement of prompt and accurate diagnosis of malaria, the current study proposes an unsupervised pixel segmentation based on a clustering algorithm in order to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. In order to obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on a clustering algorithm is applied to the intensity component of the malaria image in order to segment the infected cells from the blood cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms have been proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image as well as to remove unwanted regions such as small background pixels from the image. Finally, a seeded region growing area extraction algorithm is applied in order to remove large unwanted regions that still appear in the image and are too large to be removed by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms has been analyzed qualitatively and quantitatively by comparing the proposed cascaded clustering algorithm with the MKM and FCM clustering algorithms. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm has produced the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy values compared to the segmentation results provided by the MKM and FCM algorithms.
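The final area-extraction clean-up step can be approximated with connected-component size filtering; a minimal sketch under the assumption that a binary segmentation is already available (max_area is an illustrative threshold, not a value from the paper):

    import numpy as np
    from scipy import ndimage

    def remove_large_regions(binary, max_area=5000):
        """Drop connected regions larger than max_area pixels, approximating
        the 'seeded region growing area extraction' clean-up step."""
        labels, n = ndimage.label(binary)
        sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
        keep = np.isin(labels, 1 + np.flatnonzero(sizes <= max_area))
        return binary & keep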
Stereo-Based Region-Growing using String Matching
NASA Technical Reports Server (NTRS)
Mandelbaum, Robert; Mintz, Max
1995-01-01
We present a novel stereo algorithm based on a coarse texture segmentation preprocessing phase. Matching is performed using string comparison. Matching sub-strings correspond to matching sequences of textures. Inter-scanline clustering of matching sub-strings yields regions of matching texture. The shape of these regions yields information concerning an object's height, width and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of objects in the scene. Such a format is useful for the integration of stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization; the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of order of image primitives. Occlusions, exposures, and foreshortening effects are not problematic. The algorithm can deal with certain types of transparencies. It is computationally efficient, and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.
Low complexity pixel-based halftone detection
NASA Astrophysics Data System (ADS)
Ok, Jiheon; Han, Seong Wook; Jarno, Mielikainen; Lee, Chulhee
2011-10-01
With the rapid advances of the internet and other multimedia technologies, the digital document market has been growing steadily. Since most digital images use halftone technologies, quality degradation occurs when one tries to scan and reprint them. Therefore, it is necessary to extract the halftone areas to produce high quality printing. In this paper, we propose a low complexity pixel-based halftone detection algorithm. For each pixel, we considered a surrounding block. If the block contained any flat background regions, text, thin lines, or continuous or non-homogeneous regions, the pixel was classified as a non-halftone pixel. After excluding those non-halftone pixels, the remaining pixels were considered to be halftone pixels. Finally, documents were classified as picture or photo documents by calculating the halftone pixel ratio. The proposed algorithm proved to be memory-efficient and computationally inexpensive, and was easily implemented on a GPU.
NASA Astrophysics Data System (ADS)
Glotsos, D.; Vassiou, K.; Kostopoulos, S.; Lavdas, El; Kalatzis, I.; Asvestas, P.; Arvanitis, D. L.; Fezoulidis, I. V.; Cavouras, D.
2014-03-01
The role of Magnetic Resonance Imaging (MRI) as an alternative protocol for screening of breast cancer has been intensively investigated during the past decade. Preliminary research results have indicated that gadolinium contrast-enhanced MRI scans may reveal the nature of breast lesions by analyzing the contrast agent's uptake time. In this study, we attempt to reach the same conclusion, however, from a different perspective: by investigating, using image processing, the vascular network of the breast at two different time intervals following the administration of gadolinium. Twenty cases obtained from a 3.0-T MRI system (SIGNA HDx; GE Healthcare) were included in the study. A new modification of the Seeded Region Growing (SRG) algorithm was used to segment vessels from the surrounding background. Delineated vessels were investigated by means of their topology, morphology and texture. Results have shown that it is possible to estimate the nature of the lesions with approximately 94.4% accuracy; thus, it may be claimed that the breast vascular network does encode useful, patterned information, which can be used for characterizing breast lesions.
Extraction of Capillary Non-perfusion from Fundus Fluorescein Angiogram
NASA Astrophysics Data System (ADS)
Sivaswamy, Jayanthi; Agarwal, Amit; Chawla, Mayank; Rani, Alka; Das, Taraprasad
Capillary Non-Perfusion (CNP) is a condition in diabetic retinopathy where blood ceases to flow to certain parts of the retina, potentially leading to blindness. This paper presents a solution for automatically detecting and segmenting CNP regions from fundus fluorescein angiograms (FFAs). CNPs are modelled as valleys, and a novel technique based on an extrema pyramid is presented for trough-based valley detection. The obtained valley points are used to segment the desired CNP regions by employing a variance-based region growing scheme. The proposed algorithm has been tested on 40 images and validated against expert-marked ground truth. In this paper, we present results of testing and validation of our algorithm against ground truth and compare the segmentation performance against two other methods. The performance of the proposed algorithm is presented as a receiver operating characteristic (ROC) curve. The area under this curve is 0.842 and the distance of the ROC from the ideal point (0,1) is 0.31. The proposed method for CNP segmentation was found to outperform the watershed [1] and heat-flow [2] based methods.
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Tsai, Du-Ming; Chuang, Wei-Che
2017-04-01
Solar power has become an attractive alternative source of energy. The multi-crystalline solar cell has been widely accepted in the market because it has a relatively low manufacturing cost. Multi-crystalline solar wafers with larger grain sizes and fewer grain boundaries are higher quality and convert energy more efficiently than mono-crystalline solar cells. In this article, a new image processing method is proposed for assessing the wafer quality. An adaptive segmentation algorithm based on region growing is developed to separate the closed regions of individual grains. Using the proposed method, the shape and size of each grain in the wafer image can be precisely evaluated. Two measures of average grain size are taken from the literature and modified to estimate the average grain size. The resulting average grain size estimate dictates the quality of the crystalline solar wafers and can be considered a viable quantitative indicator of conversion efficiency.
Automated mapping of burned areas in semi-arid ecosystems using modis time-series imagery
NASA Astrophysics Data System (ADS)
Hardtke, L. A.; Blanco, P. D.; del Valle, H. F.; Metternicht, G. I.; Sione, W. F.
2015-04-01
Understanding spatial and temporal patterns of burned areas at regional scales provides a long-term perspective of fire processes and their effects on ecosystems and vegetation recovery patterns, and is a key factor in designing prevention and post-fire restoration plans and strategies. Standard satellite burned area and active fire products derived from the 500-m MODIS and SPOT are available to this end. However, prior research cautions on the use of these global-scale products for regional and sub-regional applications. Consequently, we propose a novel algorithm for automated identification and mapping of burned areas at regional scale in semi-arid shrublands. The algorithm uses a set of Normalized Burned Ratio Index products derived from MODIS time series; using a two-phased cycle, it firstly detects potentially burned pixels while keeping a low commission error (false detection of burned areas), and subsequently labels them as seed patches. Region growing image segmentation algorithms are applied to the seed patches in the second phase, to define the perimeter of fire-affected areas while decreasing omission errors (missing real burned areas). Independently-derived Landsat ETM+ burned-area reference data was used for validation purposes. The correlation between the size of burnt areas detected by the global fire products and independently-derived Landsat reference data ranged from R2 = 0.01 - 0.28, while our algorithm showed a stronger correlation (R2 = 0.96). Our findings confirm prior research calling for caution when using the global fire products locally or regionally.
Active mask segmentation of fluorescence microscope images.
Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena
2009-08-01
We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms the algorithm widely used in fluorescence microscopy, seeded watershed, both qualitatively, as well as quantitatively.
Algorithm for measuring the internal quantum efficiency of individual injection lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sommers, H.S. Jr.
1978-05-01
A new algorithm permits determination of the internal quantum efficiency η_i of individual lasers. Above threshold, the current is partitioned into a "coherent" component driving the lasing modes and the "noncoherent" remainder. Below threshold the current is known to grow as exp(qV/n_0 kT); the algorithm proposes that extrapolation of this equation into the lasing region measures the noncoherent remainder, enabling deduction of the coherent component and of its current derivative η_i. Measurements on five (AlGa)As double-heterojunction lasers cut from one wafer demonstrate the power of the new method. Comparison with band calculations of Stern shows that n_0 originates in carrier degeneracy.
A multi-focus image fusion method via region mosaicking on Laplacian pyramids
Kou, Liang; Zhang, Liguo; Sun, Jianguo; Han, Qilong; Jin, Zilong
2018-01-01
In this paper, a method named Region Mosaicking on Laplacian Pyramids (RMLP) is proposed to fuse multi-focus images that are captured by microscope. First, the Sum-Modified-Laplacian is applied to measure the focus of the multi-focus images. Then the density-based region growing algorithm is utilized to segment the focused region mask of each image. Finally, the mask is decomposed into a mask pyramid to supervise region mosaicking on a Laplacian pyramid. The region-level pyramid keeps more original information than the pixel level. The experimental results show that RMLP has the best performance in quantitative comparison with other methods. In addition, RMLP is insensitive to noise and can reduce the color distortion of the fused images on two datasets. PMID:29771912
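Mask-pyramid-supervised mosaicking on a Laplacian pyramid can be sketched as follows; this is a generic pyramid-blending sketch of the idea, not the exact RMLP implementation, and it assumes grayscale images whose sides are divisible by 2**levels and a 0/1 focus mask:

    import cv2
    import numpy as np

    def region_mosaic(img_a, img_b, mask, levels=4):
        """Blend two grayscale multi-focus images with a 0/1 focus mask via
        Laplacian pyramids supervised by a Gaussian pyramid of the mask."""
        ga, gb, gm = [[x.astype(np.float32)] for x in (img_a, img_b, mask)]
        for _ in range(levels):
            for g in (ga, gb, gm):
                g.append(cv2.pyrDown(g[-1]))
        out = None
        for i in range(levels, -1, -1):          # coarsest level first
            if i == levels:
                la, lb = ga[i], gb[i]            # residual low-pass level
            else:
                la = ga[i] - cv2.pyrUp(ga[i + 1])
                lb = gb[i] - cv2.pyrUp(gb[i + 1])
            level = gm[i] * la + (1 - gm[i]) * lb   # mask-supervised mosaic
            out = level if out is None else cv2.pyrUp(out) + level
        return np.clip(out, 0, 255).astype(np.uint8)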
Towards Automatic Semantic Labelling of 3D City Models
NASA Astrophysics Data System (ADS)
Rook, M.; Biljecki, F.; Diakité, A. A.
2016-10-01
The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as a lot of applications rely on semantics. Such information is not always available, since it is not collected at all times, it might be lost due to data transformation, or its lack may be caused by non-interoperability in data integration from other sources. This research is a first step in creating an automatic workflow that labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information, as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score for these regions that either represent the ground (terrain) or a RoofSurface. Regions with a high likeliness score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that function as a start in a region growing algorithm, to create regions of triangles of other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85 % and 99 % in the automatic semantic labelling on four different test datasets. The paper is concluded by indicating problems and difficulties, implying the next steps in the research.
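The first clustering step can be sketched as breadth-first region growing over the mesh; a minimal version under the assumption that unit triangle normals and a triangle adjacency map are already available (up_cos is an illustrative threshold):

    import numpy as np
    from collections import deque

    def grow_upward_regions(normals, adjacency, up_cos=0.9):
        """Cluster adjacent, upward-facing triangles.
        normals: (n, 3) unit normals; adjacency: dict tri -> neighbour tris."""
        upward = normals[:, 2] > up_cos          # z-component ~ cos(angle to up)
        label = -np.ones(len(normals), dtype=int)
        region = 0
        for start in np.flatnonzero(upward):
            if label[start] != -1:
                continue
            queue = deque([start])
            label[start] = region
            while queue:                         # breadth-first region growing
                t = queue.popleft()
                for nb in adjacency.get(t, ()):
                    if label[nb] == -1 and upward[nb]:
                        label[nb] = region
                        queue.append(nb)
            region += 1
        return label                             # -1 = not upward facing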
Nucleus and cytoplasm segmentation in microscopic images using K-means clustering and region growing
Sarrafzadeh, Omid; Dehnavi, Alireza Mehri
2015-01-01
Background: Segmentation of leukocytes acts as the foundation for all automated image-based hematological disease recognition systems. Most of the time, hematologists are interested in evaluation of white blood cells only. Digital image processing techniques can help them in their analysis and diagnosis. Materials and Methods: The main objective of this paper is to detect leukocytes from a blood smear microscopic image and segment them into their two dominant elements, nucleus and cytoplasm. The segmentation is conducted using two stages of applying K-means clustering. First, the nuclei are segmented using K-means clustering. Then, a proposed method based on region growing is applied to separate the connected nuclei. Next, the nuclei are subtracted from the original image. Finally, the cytoplasm is segmented using the second stage of K-means clustering. Results: The results indicate that the proposed method is able to extract the nucleus and cytoplasm regions accurately and works well even though there is no significant contrast between the components in the image. Conclusions: In this paper, a method based on K-means clustering and region growing is proposed in order to detect leukocytes from a blood smear microscopic image and segment its components, the nucleus and the cytoplasm. As the region growing step of the algorithm relies on edge information, it will not be able to separate the connected nuclei accurately when edges are poor; it requires at least a weak edge to exist between the nuclei. The nucleus and cytoplasm segments of a leukocyte can be used for feature extraction and classification, which leads to automated leukemia detection. PMID:26605213
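A single stage of the intensity clustering can be sketched as below (my own minimal version, not the authors' pipeline); picking the darkest cluster as nuclei and rerunning on the nucleus-subtracted image for cytoplasm are assumptions for illustration:

    import cv2
    import numpy as np

    def kmeans_mask(gray, k=3, pick="darkest"):
        """Cluster pixel intensities with k-means and return the mask of the
        darkest (or brightest) cluster."""
        data = gray.reshape(-1, 1).astype(np.float32)
        crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(data, k, None, crit, 3,
                                        cv2.KMEANS_PP_CENTERS)
        c = centers.ravel()
        target = c.argmin() if pick == "darkest" else c.argmax()
        return (labels.ravel() == target).reshape(gray.shape)

    # stage 1: nuclei = kmeans_mask(gray); subtract nuclei from the image;
    # stage 2: cytoplasm = kmeans_mask(...) on the remainder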
NASA Astrophysics Data System (ADS)
Tsagaan, Baigalmaa; Abe, Keiichi; Goto, Masahiro; Yamamoto, Seiji; Terakawa, Susumu
2006-03-01
This paper presents a segmentation method for brain tissues from MR images, developed for our image-guided neurosurgery system currently under development. Our goal is to segment brain tissues for creating a biomechanical model. The proposed segmentation method is based on 3-D region growing and outperforms conventional approaches by stepwise usage of intensity similarities between voxels in conjunction with edge information. Since the intensity and the edge information are complementary to each other in region-based segmentation, we use them twice by performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel being considered is examined to constrain the region growing. The expanded region of the first extraction result is then used as the domain for the next processing. Only the intensity and the edge information of the current voxel are utilized in the final extraction. Before segmentation, the intensity parameters of the brain tissues as well as the partial volume effect are estimated using an expectation-maximization (EM) algorithm in order to provide accurate data interpretation for the extraction. We tested the proposed method on T1-weighted MR images of the brain and evaluated the segmentation effectiveness by comparing the results with ground truths. The meshes generated from the segmented brain volume using mesh generation software are also shown in this paper.
NASA Astrophysics Data System (ADS)
Perugini, G.; Ricci-Tersenghi, F.
2018-01-01
We first present an empirical study of the Belief Propagation (BP) algorithm, when run on the random field Ising model defined on random regular graphs in the zero temperature limit. We introduce the notion of extremal solutions for the BP equations, and we use them to fix a fraction of spins in their ground state configuration. At the phase transition point the fraction of unconstrained spins percolates and their number diverges with the system size. This in turn makes the associated optimization problem highly nontrivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions we design a new and very easy to implement BP scheme which is able to output a large number of stable fixed points. On one hand this new algorithm is able to provide the minimum energy configuration with high probability in a competitive time. On the other hand we found that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new relevant questions about the physics of this class of models.
Fast and Robust STEM Reconstruction in Complex Environments Using Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.
2016-06-01
Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters still remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. This method does not require accurate delineation of stem points from the original point cloud. The stem reconstruction features a self-adaptive cylinder growing scheme. This algorithm is tested for a landslide region in the federal state of Vorarlberg, Austria. The algorithm results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. This algorithm is further facilitated by applying an advanced sampling technique. Different sampling rates are applied and tested. It is found that a sampling rate of 7.5% is already able to retain the stem fitting quality and simultaneously reduce the computation time significantly by ~88%.
NASA Astrophysics Data System (ADS)
Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.
2017-11-01
Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically only segment a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the proposed method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, nearly 22% of the oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
On-line determination of pork color and intramuscular fat by computer vision
NASA Astrophysics Data System (ADS)
Liao, Yi-Tao; Fan, Yu-Xia; Wu, Xue-Qian; Xie, Li-juan; Cheng, Fang
2010-04-01
In this study, the application potential of computer vision in on-line determination of CIE L*a*b* color and intramuscular fat (IMF) content of pork was evaluated. Images of pork chops from 211 pig carcasses were captured while samples were on a conveyor belt moving at 0.25 m·s-1 to simulate the on-line environment. CIE L*a*b* and IMF content were measured with a colorimeter and chemical extraction as references. The KSW algorithm combined with region selection was employed to eliminate the fat surrounding the longissimus dorsi muscle (MLD). RGB values of the pork were counted and five methods were applied for transforming RGB values to CIE L*a*b* values. A region growing algorithm with multiple seed points was applied to mask out the IMF pixels within the intensity-corrected images. The performances of the proposed algorithms were verified by comparing the measured reference values and the quality characteristics obtained by image processing. The MLD region of six samples could not be identified using the KSW algorithm. Intensity nonuniformity of the pork surface in the image could be eliminated efficiently, although the IMF region of three corrected images failed to be extracted. Given the considerable variety of color and complexity of the pork surface, CIE L*, a* and b* color of the MLD could be predicted with correlation coefficients of 0.84, 0.54 and 0.47 respectively, and IMF content could be determined with a correlation coefficient greater than 0.70. The study demonstrated that it is feasible to evaluate CIE L*a*b* values and IMF content on-line using computer vision.
Extraction of edge-based and region-based features for object recognition
NASA Astrophysics Data System (ADS)
Coutts, Benjamin; Ravi, Srinivas; Hu, Gongzhu; Shrikhande, Neelima
1993-08-01
One of the central problems of computer vision is object recognition. A catalogue of model objects is described as a set of features such as edges and surfaces. The same features are extracted from the scene and matched against the models for object recognition. Edges and surfaces extracted from the scenes are often noisy and imperfect. In this paper algorithms are described for improving low level edge and surface features. Existing edge extraction algorithms are applied to the intensity image to obtain edge features. Initial edges are traced by following directions of the current contour. These are improved by using corresponding depth and intensity information for decision making at branch points. Surface fitting routines are applied to the range image to obtain planar surface patches. An algorithm of region growing is developed that starts with a coarse segmentation and uses quadric surface fitting to iteratively merge adjacent regions into quadric surfaces based on approximate orthogonal distance regression. Surface information obtained is returned to the edge extraction routine to detect and remove fake edges. This process repeats until no more merging or edge improvement can take place. Both synthetic (with Gaussian noise) and real images containing multiple object scenes have been tested using the merging criteria. Results appeared quite encouraging.
NASA Technical Reports Server (NTRS)
1994-01-01
With the growing awareness and debate over the potential changes associated with global climate change, the polar regions are receiving increased attention. Global cloud distributions can be expected to be altered by increased greenhouse forcing. Owing to the similarity of cloud and snow-ice spectral signatures in both the visible and infrared wavelengths, it is difficult to distinguish clouds from surface features in the polar regions. This work is directed towards the development of algorithms for the ASTER and HIRIS science/instrument teams. Special emphasis is placed on a wide variety of cloud optical property retrievals, and especially retrievals of cloud and surface properties in the polar regions.
NASA Astrophysics Data System (ADS)
Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik
2015-06-01
As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as content-based image retrieval (CBIR). Imagery data are segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms which compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it will compute detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
Semi-automated mapping of burned areas in semi-arid ecosystems using MODIS time-series imagery
NASA Astrophysics Data System (ADS)
Hardtke, Leonardo A.; Blanco, Paula D.; Valle, Héctor F. del; Metternicht, Graciela I.; Sione, Walter F.
2015-06-01
Understanding spatial and temporal patterns of burned areas at regional scales provides a long-term perspective of fire processes and their effects on ecosystems and vegetation recovery patterns, and is a key factor in designing prevention and post-fire restoration plans and strategies. Remote sensing has become the most widely used tool to detect fire-affected areas over large tracts of land (e.g., ecosystem, regional and global levels). Standard satellite burned area and active fire products derived from the 500-m Moderate Resolution Imaging Spectroradiometer (MODIS) and the Satellite Pour l'Observation de la Terre (SPOT) are available to this end. However, prior research cautions on the use of these global-scale products for regional and sub-regional applications. Consequently, we propose a novel semi-automated algorithm for identification and mapping of burned areas at regional scale. The semi-arid Monte shrublands, a biome covering 240,000 km2 in the western part of Argentina and exposed to seasonal bushfires, was selected as the test area. The algorithm uses a set of normalized burned ratio index products derived from MODIS time series; using a two-phased cycle, it firstly detects potentially burned pixels while keeping a low commission error (false detection of burned areas), and subsequently labels them as seed patches. Region growing image segmentation algorithms are applied to the seed patches in the second phase, to define the perimeter of fire-affected areas while decreasing omission errors (missing real burned areas). Independently-derived Landsat ETM+ burned-area reference data was used for validation purposes. Additionally, the performance of the adaptive algorithm was assessed against standard global fire products derived from the MODIS Aqua and Terra satellites, total burned area (MCD45A1), the active fire algorithm (MOD14), and the L3JRC SPOT VEGETATION 1 km GLOBCARBON products. The correlation between the size of burned areas detected by the global fire products and independently-derived Landsat reference data ranged from R2 = 0.01-0.28, while our algorithm showed a stronger correlation (R2 = 0.96). Our findings confirm prior research calling for caution when using the global fire products locally or regionally.
Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng
2013-09-01
Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than CT due to lower bony signal-to-noise. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymous MR images data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed with a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected from the preceding steps were merged and subjected to a series of morphological processes for completion of the mandibular body region definition. Comparisons of the accuracy of segmentation between the two-stage approach, conventional region growing method, 3D level set method, and manual segmentation were made with Jaccard index, Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and 3D level set. Accurate segmentation of the body of the human mandible from MR images is achieved with the proposed two-stage rule-constrained seedless region growing approach. The accuracy achieved with the two-stage approach is higher than CRG and 3D level set.
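The evaluation metrics named above can be computed as in this generic sketch (not the authors' code); the mean surface distance is approximated symmetrically via distance transforms of the two boundary voxel sets:

    import numpy as np
    from scipy import ndimage

    def overlap_metrics(a, b, spacing=1.0):
        """Jaccard, Dice and mean surface distance between two binary volumes."""
        a, b = a.astype(bool), b.astype(bool)
        inter, union = (a & b).sum(), (a | b).sum()
        jaccard = inter / union
        dice = 2 * inter / (a.sum() + b.sum())
        # surface voxels = mask minus its erosion
        sa = a & ~ndimage.binary_erosion(a)
        sb = b & ~ndimage.binary_erosion(b)
        da = ndimage.distance_transform_edt(~sa, sampling=spacing)
        db = ndimage.distance_transform_edt(~sb, sampling=spacing)
        msd = (da[sb].mean() + db[sa].mean()) / 2
        return jaccard, dice, msd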
VirSSPA- a virtual reality tool for surgical planning workflow.
Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T
2009-03-01
A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms were implemented for Computed Tomography (CT) images: a region growing procedure was used for soft tissues and a thresholding algorithm was implemented to segment bones. The algorithms operate semiautomatically since they only need seed selection with the mouse on each tissue segmented by the user. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.
Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.
2015-01-01
The Midwest is home to one of the world's largest agricultural growing regions. Between late May and early September, with irrigation and seasonal rainfall, these crops are able to reach their full maturity. Using moderate- to high-resolution remote sensors, the monitoring of the vegetation can be achieved using the red and near-infrared wavelengths. These wavelengths allow for the calculation of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI). Vegetation growth and greenness in this region evolve uniformly as the growing season progresses. However, one of the biggest threats to Midwest vegetation during this period is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive to crop growers, and damage can be spread over long swaths associated with the tracks of the damaging storms. Damage to the vegetation can be apparent in remotely sensed imagery and is visible from space: changes appear slowly over time as slightly damaged crops wilt, or more readily if the storms strip material from the crops or destroy them completely. Previous work on identifying these hail damage swaths used manual interpretation of moderate- and higher-resolution satellite imagery. With the development of an automated and near-real time hail swath damage identification algorithm, detection can be improved and more damage indicators can be created in a faster and more efficient way. The automated detection of hail damage swaths will examine short-term, large changes in the vegetation by differencing near-real time eight-day NDVI composites and comparing them to post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. In addition, land surface temperatures from these instruments will be examined for hail damage swath identification. Initial validation of the automated algorithm is based upon Storm Prediction Center storm reports as well as the National Severe Storms Laboratory (NSSL) Maximum Estimated Size of Hail (MESH) product. Opportunities for future work are also shown, with focus on expansion of this algorithm with pixel-based image classification techniques for tracking surface changes as a result of severe weather.
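The core differencing step can be sketched as follows; the drop and minimum-size thresholds are illustrative, not the operational values, and the inputs are assumed to be co-registered pre- and post-storm NDVI composites:

    import numpy as np
    from scipy import ndimage

    def hail_swath_candidates(ndvi_pre, ndvi_post, drop=0.15, min_pixels=50):
        """Flag contiguous areas whose NDVI fell sharply between two
        composites; returns a boolean mask of candidate damage swaths."""
        diff = ndvi_post - ndvi_pre
        damaged = diff < -drop                     # large short-term NDVI loss
        labels, n = ndimage.label(damaged)
        sizes = ndimage.sum(damaged, labels, index=np.arange(1, n + 1))
        big = 1 + np.flatnonzero(sizes >= min_pixels)
        return np.isin(labels, big)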
BgCut: automatic ship detection from UAV images.
Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong
2014-01-01
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the Grabcut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images in different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region growing algorithm. The output trimap initializes the Grabcut background instead of manual intervention, and the segmentation proceeds without iteration. The effectiveness of our proposed model is demonstrated by extensive experiments on a certain area of real UAV aerial images captured by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model in this paper can be well applied in the automated processing of industrial images for related research.
Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation
NASA Astrophysics Data System (ADS)
An, Lu; Guo, Baolong
2018-03-01
Recently, illegal constructions have been occurring frequently in our surroundings, which seriously restricts the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground can be obtained using a region growing method, and as a result, illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a publicly available data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).
NASA Astrophysics Data System (ADS)
Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik
2001-05-01
Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas have Hounsfield values (HU) approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. However, the edge enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge enhancement reconstruction algorithm. The next step implies computation of the antero-posterior density gradient caused by gravity and correction for it. Motion artefacts are corrected for in a third step by use of normalized averaging, thresholding and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without. Our algorithm improved separation of the two groups considerably. Our algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
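The baseline density mask technique itself is a one-liner; a minimal sketch, assuming a HU volume and a precomputed lung mask (the -950 HU cutoff is a common choice in the literature, while the text above targets values approaching -1000 HU):

    import numpy as np

    def density_mask_index(hu_volume, lung_mask, threshold=-950):
        """Fraction of lung voxels below a HU threshold (emphysema index)."""
        lung = hu_volume[lung_mask]
        return (lung < threshold).mean()           # value in [0, 1]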
Local Surface Reconstruction from MER images using Stereo Workstation
NASA Astrophysics Data System (ADS)
Shin, Dongjoe; Muller, Jan-Peter
2010-05-01
The authors present a semi-automatic workflow that reconstructs the 3D shape of the martian surface from local stereo images delivered by PanCam or NavCam on systems such as the NASA Mars Exploration Rover (MER) Mission and in the future the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, which is then followed by tiepoint refinement, stereo-matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphic hardware independence, the stereo application has been implemented using JPL's JADIS graphic library, which is written in JAVA, and the remaining processing blocks used in the reconstruction workflow have also been developed as a JAVA package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often required to employ an optional validity check and/or quality enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm, so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion to assess the quality of reconstruction is the density (or completeness) of reconstruction, which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL-HRSC reconstruction workflow. This algorithm's performance is reasonable even for close-range imagery, so long as the stereo pair does not have too large a baseline displacement. For post-processing, a Bundle Adjustment (BA) is used to optimise the initial calibration parameters, which bootstrap the reconstruction results. Amongst many options for the non-linear optimisation, the LMA has been adopted due to its stability, so that the BA searches for the best calibration parameters whilst iteratively minimising the re-projection errors of the initial reconstruction points. For the evaluation of the proposed method, its result is compared with the reconstruction from a disparity map provided by JPL using their operational processing system. Visual and quantitative comparisons will be presented, as well as updated camera parameters. As part of future work, we will investigate a method expediting the processing speed of the stereo region growing process and look into the possibility of extending the use of the stereo workstation to orbital image processing. Such an interactive stereo workstation can also be used to digitize points and line features, as well as assess the accuracy of stereo processed results produced from other stereo matching algorithms available from within the consortium and elsewhere. It can also provide "ground truth", when suitably refined, for stereo matching algorithms, as well as provide visual cues as to why these matching algorithms sometimes fail, to mitigate this in the future.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".
NASA Astrophysics Data System (ADS)
You, Wonsang; Andescavage, Nickie; Zun, Zungho; Limperopoulos, Catherine
2017-03-01
Intravoxel incoherent motion (IVIM) magnetic resonance imaging is an emerging non-invasive technique that has been recently applied to quantify in vivo global placental perfusion. We propose a robust semi-automated method for segmenting the placenta into fetal and maternal compartments from IVIM data, using a multi-label image segmentation algorithm called `GrowCut'. Placental IVIM data were acquired on a 1.5T scanner from 16 healthy pregnant women between 21-37 gestational weeks. The voxel-wise perfusion fraction was then estimated after non-rigid image registration. The seed regions of the fetal and maternal compartments were determined using structural T2-weighted reference images, and improved progressively through an iterative process of the GrowCut algorithm to accurately encompass fetal and maternal compartments. We demonstrated that the placental perfusion fraction decreased in both fetal (-0.010/week) and maternal compartments (-0.013/week) while their relative difference (ffetal-fmaternal) gradually increased with advancing gestational age (+0.003/week, p=0.065). Our preliminary results show that the proposed method was effective in distinguishing placental compartments using IVIM.
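GrowCut is published as a cellular-automaton labelling scheme; the following is a minimal 2D sketch of that automaton (my own simplification, not the authors' placental pipeline). Each seeded cell "attacks" its neighbours and conquers them when its similarity-weighted strength exceeds the defender's:

    import numpy as np

    def growcut(image, seeds, n_iter=200):
        """Minimal GrowCut sketch. seeds: int array, 0 = unlabelled,
        1..K = seed labels. Returns the final label map."""
        img = image.astype(float)
        label = seeds.copy()
        strength = (seeds > 0).astype(float)       # seed cells start at 1
        max_diff = np.ptp(img) + 1e-9
        shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
        for _ in range(n_iter):
            changed = False
            for dy, dx in shifts:                  # np.roll wraps at borders;
                q_img = np.roll(img, (dy, dx), axis=(0, 1))   # fine for a sketch
                q_lab = np.roll(label, (dy, dx), axis=(0, 1))
                q_str = np.roll(strength, (dy, dx), axis=(0, 1))
                g = 1.0 - np.abs(img - q_img) / max_diff      # similarity weight
                attack = g * q_str
                win = (attack > strength) & (q_lab > 0)
                if win.any():
                    label[win], strength[win] = q_lab[win], attack[win]
                    changed = True
            if not changed:
                break
        return label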
Intra-seasonal mapping of CO2 flux in rangelands of northern Kazakhstan at one-kilometer resolution
Wylie, B.K.; Gilmanov, T.G.; Johnson, D.A.; Saliendra, Nicanor Z.; Akshalov, K.; Tieszen, L.L.; Reed, B.C.; Laca, Emilio
2004-01-01
Algorithms that establish relationships between variables obtained through remote sensing and geographic information system (GIS) technologies are needed to allow the scaling up of site-specific CO2 flux measurements to regional levels. We obtained Bowen ratio-energy balance (BREB) flux tower measurements during the growing seasons of 1998-2000 above a grassland steppe in Kazakhstan. These BREB data were analyzed using ecosystem light-curve equations to quantify 10-day CO2 fluxes associated with gross primary production (GPP) and total respiration (R). Remotely sensed, temporally smoothed normalized difference vegetation index (NDVIsm) and environmental variables were used to develop multiple regression models for the mapping of 10-day CO2 fluxes for the Kazakh steppe. Ten-day GPP was estimated (R2 = 0.72) by day of year (DOY) and NDVIsm, and 10-day R was estimated (R2 = 0.48) with the estimated GPP and estimated 10-day photosynthetically active radiation (PAR). Regression tree analysis estimated 10-day PAR from latitude, NDVIsm, DOY, and precipitation (R2 = 0.81). Fivefold cross-validation indicated that these algorithms were reasonably robust. GPP, R, and resulting net ecosystem exchange (NEE) were mapped for the Kazakh steppe grassland every 10 days and summed to produce regional growing season estimates of GPP, R, and NEE. Estimates of 10-day NEE agreed well with BREB observations in 2000, showing a slight underestimation in the late summer. Growing season (May to October) mean NEE for Kazakh steppe grasslands was 1.27 Mg C/ha in 2000. Winter flux data were collected during the winter of 2001-2002 and are being analyzed to close the annual carbon budget for the Kazakh steppe. © 2004 Springer-Verlag New York, LLC.
NASA Astrophysics Data System (ADS)
Ma, Ming; Wang, Huafeng; Liu, Yan; Zhang, Hao; Gu, Xianfeng; Liang, Zhengrong
2014-03-01
Cone-beam computed tomography (CBCT) has attracted growing interest among researchers in image reconstruction. In practical CBCT applications, the mAs level of the X-ray tube current is lowered in order to reduce the CBCT dose. The lowering of the X-ray tube current, however, results in degraded image quality. Thus, low-dose CBCT image reconstruction is in effect a noise problem. To acquire clinically acceptable image quality while keeping the X-ray tube current as low as achievable, some penalized weighted least-squares (PWLS)-based image reconstruction algorithms have been developed. One representative strategy in previous work is to model the prior information for solution regularization using an anisotropic penalty term. To enhance edge preservation and noise suppression at a finer scale, a novel algorithm combining the local binary pattern (LBP) with penalized weighted least-squares (PWLS), called the LBP-PWLS-based image reconstruction algorithm, is proposed in this work. After the LBP is used to classify the region around each voxel as spot, flat, or edge/corner, the proposed algorithm adaptively encourages strong diffusion on local spot/flat regions and less diffusion on edge/corner ones by adjusting the penalty in the cost function. The LBP-PWLS-based reconstruction algorithm was evaluated using sinogram data acquired by a clinical CT scanner from the CatPhan® 600 phantom. Experimental results on the noise-resolution tradeoff measurement and other quantitative measurements demonstrated its feasibility and effectiveness in edge preservation and noise suppression in comparison with a previous PWLS reconstruction algorithm.
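As a rough illustration of how an LBP code could steer the penalty, the sketch below damps the smoothing weight wherever a pixel's uniform LBP code suggests an edge or corner. The 2D setting, the code thresholds and the weight values are assumptions, not the authors' 3D scheme.

```python
# Hedged sketch: derive a per-pixel penalty weight from uniform LBP
# codes so that smoothing is encouraged on flat/spot regions and
# discouraged near edges/corners.
import numpy as np
from skimage.feature import local_binary_pattern

def penalty_weights(image, p=8, r=1.0):
    codes = local_binary_pattern(image, p, r, method='uniform')
    # Heuristic assumption: codes near 0 or p mark flat/spot
    # neighbourhoods, mid-range uniform codes mark edges/corners.
    edge_like = (codes > 2) & (codes < p - 2)
    return np.where(edge_like, 0.2, 1.0)  # weaker smoothing on edges

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
weights = penalty_weights(img)
```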
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called the Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
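A minimal sketch of the CMAES stage, using the open-source `cma` package on a toy two-parameter misfit; the misfit function and all settings are stand-ins for the paper's EM inverse problems.

```python
# Hedged sketch: CMA-ES locates low-misfit regions of a model space;
# candidates below a misfit threshold would seed the ED sampling.
import cma  # pip install cma

def misfit(m):
    # Toy non-convex misfit with a broad low-misfit valley
    return (m[0] ** 2 - m[1]) ** 2 + 0.01 * (m[0] - 1) ** 2

es = cma.CMAEvolutionStrategy(x0=[3.0, 3.0], sigma0=1.0)
es.optimize(misfit)                 # runs the ask/tell loop internally
candidates = es.ask()               # one more population near optimum
low_misfit_models = [m for m in candidates if misfit(m) < 1e-3]
print(es.result.xbest, es.result.fbest)
```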
Image analysis for skeletal evaluation of carpal bones
NASA Astrophysics Data System (ADS)
Ko, Chien-Chuan; Mao, Chi-Wu; Lin, Chi-Jen; Sun, Yung-Nien
1995-04-01
The assessment of bone age is an important task in pediatric radiology. It provides very important information for the treatment and prediction of skeletal growth in a developing child. So far, various computerized algorithms for automatically assessing skeletal growth have been reported, most of which attempt to analyze phalangeal growth. The most fundamental step in these automatic measurement methods is the image segmentation that extracts bones from soft tissue and background. These automatic segmentation methods for hand radiographs can roughly be categorized into two main approaches: edge-based and region-based methods. This paper presents a region-based carpal-bone segmentation approach. It is organized into four stages: contrast enhancement, moment-preserving thresholding, morphological processing, and region-growing labeling.
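For reference, the final region-growing labeling stage can be pictured with the classic seeded scheme below: grow from a seed while neighboring pixels stay within a tolerance of the running region mean. This is a generic sketch, not the paper's exact criterion.

```python
# Generic seeded region growing on a 2D image (4-connectivity).
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10.0):
    h, w = image.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    queue = deque([seed])
    total, count = float(image[seed]), 1   # running region statistics
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(image[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask
```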
Timp, Sheila; Karssemeijer, Nico
2004-05-01
Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique--based on dynamic programming--to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee resulting contours to be closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69, for the other two methods 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristics analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region based evaluation the area Az under the receiver operating characteristics curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between other methods were not significant.
An algorithm for calculi segmentation on ureteroscopic images.
Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme
2011-03-01
The purpose of the study is to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. Renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region growing algorithm to segment renal calculi on ureteroscopic images. Using real video images to build a reference segmentation, we computed statistics on different image metrics, such as Precision, Recall, and the Yasnoff Measure, to compare our segmentation against this ground truth. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the developed algorithm was able to correctly detect more than 90% of the surface of the calculi, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is feasible. The next step is the integration of our algorithm into the command scheme of a motorized system to build a complete operating prototype.
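The Precision and Recall statistics reduce to simple overlaps between the segmented and reference masks, as in the sketch below (the Yasnoff measure, which additionally weights misclassified pixels by their distance to the reference, is omitted):

```python
# Precision/Recall of a binary segmentation against a reference mask.
import numpy as np

def precision_recall(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    precision = tp / max(pred.sum(), 1)   # fraction of detection correct
    recall = tp / max(ref.sum(), 1)       # fraction of reference found
    return precision, recall
```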
Quantification of human body fat tissue percentage by MRI.
Müller, Hans-Peter; Raudies, Florian; Unrath, Alexander; Neumann, Heiko; Ludolph, Albert C; Kassubek, Jan
2011-01-01
The MRI-based evaluation of the quantity and regional distribution of adipose tissue is one objective measure in the investigation of obesity. The aim of this article was to report a comprehensive and automatic analytical method for the determination of the volumes of subcutaneous fat tissue (SFT) and visceral fat tissue (VFT) in either the whole human body or selected slices or regions of interest. Using an MRI protocol in an examination position that was convenient for volunteers and patients with severe diseases, 22 healthy subjects were examined. The software platform was able to merge MRI scans of several body regions acquired in separate acquisitions. Through a cascade of image processing steps, SFT and VFT volumes were calculated. Whole-body SFT and VFT distributions, as well as fat distributions of defined body slices, were analysed in detail. Complete three-dimensional datasets were analysed in a reproducible manner with as few operator-dependent interventions as possible. In order to determine the SFT volume, the ARTIS (Adapted Rendering for Tissue Intensity Segmentation) algorithm was introduced. The advantage of the ARTIS algorithm was the delineation of SFT volumes in regions in which standard region-growing techniques fail. Using the ARTIS algorithm, automatic SFT volume detection was feasible. MRI data analysis was able to determine SFT and VFT volume percentages using new analytical strategies. With the techniques described, it was possible to detect changes in SFT and VFT percentages of the whole body and selected regions. The techniques presented in this study are likely to be of use in obesity-related investigations, as well as in the examination of longitudinal changes in weight during various medical conditions. Copyright © 2010 John Wiley & Sons, Ltd.
A hybrid lung and vessel segmentation algorithm for computer aided detection of pulmonary embolism
NASA Astrophysics Data System (ADS)
Raghupathi, Laks; Lakare, Sarang
2009-02-01
Advances in multi-detector technology have made CT pulmonary angiography (CTPA) a popular radiological tool for pulmonary emboli (PE) detection. CTPA provides rich detail of lung anatomy and is a useful diagnostic aid in highlighting even very small PE. However, analyzing hundreds of slices is laborious and time-consuming for the practicing radiologist and may also lead to misdiagnosis due to the presence of various PE look-alikes. Computer-aided diagnosis (CAD) can be a potential second reader in providing key diagnostic information. Since PE occurs only in the pulmonary arteries, it is important to mark this region of interest (ROI) during CAD preprocessing. In this paper, we present a new lung and vessel segmentation algorithm for extracting the contrast-enhanced vessel ROI in CTPA. Existing approaches to segmentation either provide only the larger lung area without highlighting the vessels or are computationally prohibitive. We propose a hybrid lung and vessel segmentation which starts from an initial lung ROI and determines the vessels through a series of refinement steps. We first identify a coarse vessel ROI by finding the "holes" in the lung ROI. We then use the initial ROI as seed points for a region-growing process while carefully excluding regions which are not relevant. The vessel segmentation mask covers 99% of the 259 PE from a real-world set of 107 CTPA. Further, our algorithm increases the net sensitivity of a prototype CAD system by 5-9% across all PE categories in the training and validation data sets. The average run-time of the algorithm was only 100 seconds on a standard workstation.
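The coarse vessel-ROI step can be illustrated in a few lines: contrast-enhanced vessels appear as "holes" in a binary lung mask, recoverable by filling the mask and subtracting. A sketch under that assumption, not the authors' code:

```python
# Coarse vessel ROI = filled lung mask minus the original lung mask.
import numpy as np
from scipy.ndimage import binary_fill_holes

def coarse_vessel_roi(lung_mask):
    filled = binary_fill_holes(lung_mask)
    return np.logical_and(filled, np.logical_not(lung_mask))
```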
The life-cycle of upper-tropospheric jet streams identified with a novel data segmentation algorithm
NASA Astrophysics Data System (ADS)
Limbach, S.; Schömer, E.; Wernli, H.
2010-09-01
Jet streams are prominent features of the upper-tropospheric atmospheric flow. Through the thermal wind relationship, these regions of intense horizontal wind speed (typically larger than 30 m/s) are associated with pronounced baroclinicity, i.e., with regions where extratropical cyclones develop due to baroclinic instability processes. Individual jet streams are non-stationary elongated features that can extend over more than 2000 km in the along-flow direction and 200-500 km in the across-flow direction. Their lifetime can vary between a few days and several weeks. In recent years, feature-based algorithms have been developed that allow compiling synoptic climatologies and typologies of upper-tropospheric jet streams based upon objective selection criteria and climatological reanalysis datasets. In this study a novel algorithm to efficiently identify jet streams using an extended region-growing segmentation approach is introduced. This algorithm iterates over a 4-dimensional field of horizontal wind speed from ECMWF analyses and decides at each grid point whether all prerequisites for a jet stream are met. In a single pass the algorithm keeps track of all adjacencies of these grid points and creates the 4-dimensional connected segments associated with each jet stream. In addition to the detection of these sets of connected grid points, the algorithm analyzes the development over time of the distinct 3-dimensional features each segment consists of. Important events in the development of these features, for example mergings and splittings, are detected and analyzed on a per-grid-point and per-feature basis. The output of the algorithm consists of the actual sets of grid points augmented with information about the particular events, and of the so-called event graphs, which are an abstract representation of the distinct 3-dimensional features and events of each segment. This technique provides comprehensive information about the frequency of upper-tropospheric jet streams, their preferred regions of genesis, merging, splitting, and lysis, and statistical information about their size, amplitude and lifetime. The presentation will introduce the technique, provide example visualizations of the time evolution of the identified 3-dimensional jet stream features, and present results from a first multi-month "climatology" of upper-tropospheric jets. In the future, the technique can be applied to longer datasets, for instance reanalyses and output from global climate model simulations, providing detailed information about key characteristics of jet stream life cycles.
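As a rough stand-in for the single-pass segmentation core, the sketch below thresholds a 4-dimensional wind-speed field and extracts connected segments with SciPy's n-dimensional labelling; the synthetic field, the 30 m/s threshold and the full 4-D adjacency are illustrative assumptions.

```python
# Connected jet-stream segments in a (time, level, lat, lon) field.
import numpy as np
from scipy.ndimage import label

wind = np.random.rand(10, 5, 40, 80) * 60.0   # synthetic m/s field
jet_mask = wind > 30.0                         # jet prerequisite
structure = np.ones((3, 3, 3, 3), int)         # full 4-D adjacency
segments, n_segments = label(jet_mask, structure=structure)
```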
Terrestrial cross-calibrated assimilation of various data sources
NASA Astrophysics Data System (ADS)
Groß, André; Müller, Richard; Schömer, Elmar; Trentmann, Jörg
2014-05-01
We introduce a novel software tool, ANACLIM, for the efficient assimilation of multiple two-dimensional data sets using a variational approach. We consider a single objective function in two spatial coordinates with higher derivatives. This function measures the deviation of the input data from the target data set. Using the Euler-Lagrange formalism, the minimization of this objective function can be transformed into a sparse system of linear equations, which can be solved efficiently by a conjugate gradient solver on a desktop workstation. The objective function allows for a series of physically motivated constraints. The user can control the relative global weights, as well as the individual weight of each constraint at a per-grid-point level. The different constraints are realized as separate terms of the objective function: one similarity term for each input data set and two additional smoothness terms, penalizing high gradient and curvature values. ANACLIM is designed to combine similarity and smoothness operators easily and to allow a choice of different solvers. We performed a series of benchmarks to calibrate and verify our solution. We use, for example, terrestrial stations of BSRN and GEBA for the solar incoming flux and AERONET stations for aerosol optical depth. First results show that combining these data sources with our approach yields a significant benefit over the individual input datasets. ANACLIM also includes a region growing algorithm for the assimilation of ground-based data. The region growing algorithm computes the maximum area around a station that is representative of the station data. The regions are grown under several constraints, such as the homogeneity of the area. The resulting dataset is then used within the assimilation process. Verification is performed by cross-validation. The method and validation results will be presented and discussed.
NASA Astrophysics Data System (ADS)
Fan, Tian-E.; Shao, Gui-Fang; Ji, Qing-Shuang; Zheng, Ji-Wen; Liu, Tun-dong; Wen, Yu-Hua
2016-11-01
Theoretically, determining the structure of a cluster amounts to searching for the global minimum on its potential energy surface. The global minimization problem is often nondeterministic-polynomial-time (NP) hard, and the number of local minima grows exponentially with the cluster size. In this article, a multi-populations multi-strategies differential evolution algorithm is proposed to search for the globally stable structure of Fe and Cr nanoclusters. The algorithm combines a multi-populations differential evolution with an elite pool scheme to keep the diversity of the solutions and avoid prematurely trapping into local optima. Moreover, multiple strategies, such as a growing method in initialization and three differential strategies in mutation, are introduced to improve the convergence speed and lower the computational cost. The accuracy and effectiveness of our algorithm have been verified by comparing the results for Fe clusters with the Cambridge Cluster Database. Meanwhile, the performance of our algorithm has been analyzed by comparing the convergence rate and energy evaluations with those of the classical DE algorithm, considering the multi-populations scheme, the multi-strategies mutation, and the growing method in initialization respectively. Furthermore, the structural growth pattern of Cr clusters has been predicted by this algorithm. The results show that the lowest-energy structure of Cr clusters contains many icosahedra, and the number of icosahedral rings rises with increasing size.
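Two classical differential-evolution mutation strategies of the kind combined in such multi-strategies schemes are sketched below; the population handling and scale factor F are generic assumptions, not the authors' exact operators.

```python
# DE/rand/1 and DE/best/1 mutation on a population of coordinate vectors.
import numpy as np

def mutate_rand_1(pop, i, f=0.5):
    r1, r2, r3 = np.random.choice(
        [k for k in range(len(pop)) if k != i], 3, replace=False)
    return pop[r1] + f * (pop[r2] - pop[r3])

def mutate_best_1(pop, fitness, i, f=0.5):
    best = pop[int(np.argmin(fitness))]   # lowest-energy individual
    r1, r2 = np.random.choice(
        [k for k in range(len(pop)) if k != i], 2, replace=False)
    return best + f * (pop[r1] - pop[r2])
```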
Range data description based on multiple characteristics
NASA Technical Reports Server (NTRS)
Al-Hujazi, Ezzet; Sood, Arun
1988-01-01
An algorithm for describing range images based on mean curvature (H) and Gaussian curvature (K) is presented. Range images are unique in that they directly approximate the physical surfaces of a real-world 3-D scene. The curvature parameters are derived from the fundamental theorems of differential geometry and provide view-invariant pixel labels that can be used to characterize the scene. The signs of H and K can be used to classify each pixel into one of eight possible surface types. Due to the sensitivity of these parameters to noise, the resulting HK-sign map does not directly identify surfaces in the range images and must be further processed. A region growing algorithm based on modeling the scene points with a Markov Random Field (MRF) of variable neighborhood size and edge models is suggested. This approach allows the integration of information from multiple characteristics in an efficient way. The performance of the proposed algorithm on a number of synthetic and real range images is discussed.
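The eight surface types follow directly from the sign pattern of H and K; a minimal lookup, with an assumed zero-tolerance eps and the usual caveat that the sign convention depends on the chosen normal orientation:

```python
# Classify a pixel's surface type from mean (H) and Gaussian (K)
# curvature signs (Besl-Jain style labels).
def surface_type(h, k, eps=1e-3):
    hs = 0 if abs(h) < eps else (1 if h > 0 else -1)
    ks = 0 if abs(k) < eps else (1 if k > 0 else -1)
    table = {(-1, 1): "peak", (1, 1): "pit",
             (-1, 0): "ridge", (1, 0): "valley", (0, 0): "flat",
             (-1, -1): "saddle ridge", (1, -1): "saddle valley",
             (0, -1): "minimal surface"}
    # (0, 1) cannot occur exactly, since K <= H**2
    return table.get((hs, ks), "undefined")
```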
NASA Astrophysics Data System (ADS)
Park, Bumwoo; Furlan, Alessandro; Patil, Amol; Bae, Kyongtae T.
2010-03-01
Pulmonary embolism (PE) is a medical condition defined as the obstruction of the pulmonary arteries by a blood clot, usually originating in the deep veins of the lower limbs. PE is a common but elusive illness that can cause significant disability and death if not promptly diagnosed and effectively treated. CT Pulmonary Angiography (CTPA) is the first-line imaging study for the diagnosis of PE. While clinical prediction rules have recently been developed to associate short-term risks and stratify patients with acute PE, there is a dearth of objective biomarkers associated with the long-term prognosis of the disease. Clot (embolus) burden is a promising biomarker for the prognosis and recurrence of PE and can be quantified from CTPA images. However, to our knowledge, no study has reported a method for segmentation and measurement of clot from CTPA images. Thus, the purpose of this study was to develop a semi-automated method for segmentation and measurement of clot from CTPA images. Our method was based on a Modified Seeded Region Growing (MSRG) algorithm consisting of two steps: (1) the observer identifies a clot of interest on CTPA images and places a spherical seed over the clot; and (2) a region grows around the seed on the basis of a rolling-ball process that clusters the neighboring voxels whose CT attenuation values are within the range of the mean +/- two standard deviations of the initial seed voxels. The rolling ball propagates iteratively until the clot is completely clustered and segmented. Our experimental results revealed that the performance of the MSRG was superior to that of the conventional SRG for segmenting clots, as evidenced by reduced degrees of over- or under-segmentation from adjacent anatomical structures. To assess the clinical value of clot burden for the prognosis of PE, we are currently applying the MSRG to the segmentation and volume measurement of clots from CTPA images acquired in a large cohort of patients with PE in an ongoing NIH-sponsored clinical trial.
Segmentation of suspicious objects in an x-ray image using automated region filling approach
NASA Astrophysics Data System (ADS)
Fu, Kenneth; Guest, Clark; Das, Pankaj
2009-08-01
To accommodate the flow of commerce, cargo inspection systems require a high probability of detection and a low false alarm rate while still maintaining a minimum scan speed. Since objects of interest (high atomic-number metals) will often be heavily shielded to avoid detection, any detection algorithm must be able to identify such objects despite the shielding. Since pixels of a shielded object have a greater opacity than the shielding, we use a clustering method to classify objects in the image by pixel intensity levels. We then look within each intensity-level region for sub-clusters of pixels with greater opacity than the surrounding region. A region containing an object has an enclosed-contour region (a hole) inside it. We apply a region filling technique to fill in the hole, which represents a shielded object of potential interest. One method for region filling is seed-growing, which puts a "seed" starting point in the hole area and uses a selected structural element to fill out that region. However, automatic seed point selection is a hard problem; it requires additional information to decide whether a pixel is within an enclosed region. Here, we propose a simple, robust method for region filling that avoids the problem of seed point selection. In our approach, we calculate the gradients Gx and Gy at each pixel of a binary image and, along each row y, fill in 1s between a pair of positions x1 and x2 with Gx(x1,y) = -1 and Gx(x2,y) = +1; the same is done in the y-direction. The intersection of the two results is the filled region. We give a detailed discussion of our algorithm, discuss the strengths this method has over other methods, and show results of using our method.
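A minimal sketch of the described seed-free fill: pair each -1 gradient with the next +1 along every row, repeat along columns, and intersect the two results. The pairing rule below is one plausible reading of the description, not the authors' exact code.

```python
# Seed-free region filling via row/column gradient pairs on a
# binary image; holes enclosed in both directions get filled.
import numpy as np

def fill_rows(binary):
    filled = binary.copy()
    g = np.diff(binary.astype(np.int8), axis=1)   # Gx along each row
    for y in range(binary.shape[0]):
        falls = np.where(g[y] == -1)[0]           # 1 -> 0 transitions
        rises = np.where(g[y] == 1)[0]            # 0 -> 1 transitions
        for x1 in falls:
            later = rises[rises > x1]
            if later.size:                        # fill up to next rise
                filled[y, x1 + 1:later[0] + 1] = 1
    return filled

def fill_holes(binary):
    # Intersect the row-wise and column-wise fills, as described above
    return np.logical_and(fill_rows(binary), fill_rows(binary.T).T)
```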
Self-growing neural network architecture using crisp and fuzzy entropy
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.
1992-01-01
The paper briefly describes the self-growing neural network algorithm, CID3, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results for a real-life recognition problem of distinguishing defects in a glass ribbon, and for a benchmark problem of telling two spirals apart, are shown and discussed.
NASA Astrophysics Data System (ADS)
Chakrabarti, S.; Judge, J.; Bindlish, R.; Bongiovanni, T.; Jackson, T. J.
2016-12-01
The NASA Soil Moisture Active Passive (SMAP) mission provides global observations of brightness temperature (TB) at 36 km. For these observations to be relevant to studies in agricultural regions, the TB values need to be downscaled to finer resolutions. In this study, a machine learning algorithm is introduced for downscaling TB from 36 km to 9 km. The algorithm uses image segmentation to cluster the study region based on meteorological and land cover similarity, followed by a support vector machine based regression that computes the value of the disaggregated TB at all pixels. High-resolution remote sensing products such as land surface temperature, normalized difference vegetation index, enhanced vegetation index, precipitation, soil texture, and land cover were used for downscaling. The algorithm was implemented in Iowa, United States, during the growing season from April to July 2015, when the SMAP L3_SM_AP TB product at 9 km was available for comparison. In addition, the downscaled estimates from the algorithm are compared with 9 km TB obtained by resampling the SMAP L1B_TB product at 36 km. It was found that the downscaled TB were very similar to the SMAP L3_SM_AP TB product, even for vegetated areas, with a mean difference ≤ 5 K. However, the standard deviation of the downscaled TB was lower by 7 K than that of the L3_SM_AP product. The probability density functions of the downscaled TB were similar to those of the SMAP TB. The results indicate that these downscaling algorithms may be used for downscaling TB using complex non-linear correlations on a grid without using active microwave observations.
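The regression stage can be pictured as a generic support vector regression mapping fine-scale predictors to TB. The synthetic data, feature count and hyperparameters below are assumptions, not the study's trained model.

```python
# Hedged sketch: train an SVR on coarse-scale predictor/TB pairs,
# then predict TB on the fine-scale predictor grid.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_coarse = rng.random((500, 5))     # e.g. LST, NDVI, EVI, precip, texture
tb_coarse = 250 + 30 * X_coarse[:, 0] + rng.normal(0, 1, 500)  # TB in K

model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0))
model.fit(X_coarse, tb_coarse)

X_fine = rng.random((4500, 5))      # same predictors on the 9 km grid
tb_fine = model.predict(X_fine)     # downscaled TB estimates
```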
Automated detection of diabetic retinopathy on digital fundus images.
Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D
2002-02-01
The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M; Woo, B; Kim, J
Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, with the deformable model, and from 0.799 to 0.976 and 3.5% to 26.6%, respectively, with the grow cut method. Coefficients of variation for especially important features, previously reported as predictive of patient survival, were: 3.4% with the deformable model and 7.4% with the grow cut method for the proportion of contrast-enhanced tumor region; 5.5% with the deformable model and 25.7% with the grow cut method for the proportion of necrosis; and 2.1% with the deformable model and 4.4% with the grow cut method for edge sharpness of the tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment.
Nagar, Anurag; Hahsler, Michael
2013-01-01
Next Generation Sequencing techniques are producing enormous amounts of biological sequence data and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA.
Unsupervised segmentation of lungs from chest radiographs
NASA Astrophysics Data System (ADS)
Ghosh, Payel; Antani, Sameer K.; Long, L. Rodney; Thoma, George R.
2012-03-01
This paper describes our preliminary investigations for deriving and characterizing coarse-level textural regions present in the lung field on chest radiographs using unsupervised grow-cut (UGC), a cellular automaton based unsupervised segmentation technique. The segmentation has been performed on a publicly available data set of chest radiographs. The algorithm is useful for this application because it automatically converges to a natural segmentation of the image from random seed points using low-level image features such as pixel intensity values and texture features. Our goal is to develop a portable screening system for early detection of lung diseases for use in remote areas in developing countries. This involves developing automated algorithms for screening x-rays as normal/abnormal with a high degree of sensitivity, and identifying lung disease patterns on chest x-rays. Automatically deriving and quantitatively characterizing abnormal regions present in the lung field is the first step toward this goal. Therefore, region-based features such as geometrical and pixel-value measurements were derived from the segmented lung fields. In the future, feature selection and classification will be performed to identify pathological conditions such as pulmonary tuberculosis on chest radiographs. Shape-based features will also be incorporated to account for occlusions of the lung field and by other anatomical structures such as the heart and diaphragm.
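A minimal grow-cut sketch in the usual cellular-automaton formulation: labelled cells "attack" their neighbours and take them over when their strength, damped by intensity dissimilarity, exceeds the defender's strength. The seeds, the damping function and the iteration count are illustrative, not the paper's configuration.

```python
# Grow-cut: labels (0 = unlabelled) and strengths evolve until stable.
import numpy as np

def growcut(image, labels, strength, n_iter=50):
    img = image.astype(float)
    max_diff = np.ptp(img) or 1.0
    h, w = img.shape
    for _ in range(n_iter):
        new_labels, new_strength = labels.copy(), strength.copy()
        for y in range(h):
            for x in range(w):
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx]:
                        g = 1.0 - abs(img[ny, nx] - img[y, x]) / max_diff
                        if g * strength[ny, nx] > new_strength[y, x]:
                            new_strength[y, x] = g * strength[ny, nx]
                            new_labels[y, x] = labels[ny, nx]
        labels, strength = new_labels, new_strength
    return labels
```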
A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods
Tan, Hanqing; Fujita, Hiroshi
2013-01-01
This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final object boundary, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the sensitivity of level set methods to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of over-segmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods by achieving higher accuracy and making fewer false segmentations in pancreas extraction. PMID:24066016
Vehicle Detection of Aerial Image Using TV-L1 Texture Decomposition
NASA Astrophysics Data System (ADS)
Wang, Y.; Wang, G.; Li, Y.; Huang, Y.
2016-06-01
Vehicle detection from high-resolution aerial imagery facilitates the study of public traveling behavior on a large scale. In the context of roads, a simple and effective algorithm is proposed to extract texture-salient vehicles from the pavement surface. Texturally, most of the pavement surface varies little, except in the neighborhood of vehicles and edges. Within a certain distance of a given vector of the road network, the aerial image is decomposed into a smoothly varying cartoon part and an oscillatory texture part containing the details. The variational model with a Total Variation regularization term and an L1 fidelity term (TV-L1) is adopted to obtain the salient texture of vehicles and the cartoon surface of the pavement. To eliminate the noise of the texture decomposition, regions of pavement surface are refined by seed growing and morphological operations. Based on a shape saliency analysis of the central objects in those regions, vehicles are detected as objects with salient rectangular shape. The proposed algorithm is tested on a diverse set of aerial images acquired at various resolutions and scenarios around China. Experimental results demonstrate that the proposed algorithm detects vehicles at a rate of 71.5% with a false alarm rate of 21.5%, and that processing takes 39.13 seconds for a 4656 x 3496 aerial image. It is promising for large-scale transportation management and planning.
A New Algorithm Using the Non-Dominated Tree to Improve Non-Dominated Sorting.
Gustavsson, Patrik; Syberfeldt, Anna
2018-01-01
Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates as the population size grows. The same drawback also applies to other non-dominated sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS), which works well on a limited number of objectives but deteriorates as the number of objectives grows. This article presents a new, more efficient algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and a large number of objectives more efficiently than existing algorithms for non-dominated sorting. In the article, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
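For reference, the FNS baseline that these algorithms improve upon can be written compactly; the sketch below assumes minimisation of all objectives and returns the indices of each Pareto front.

```python
# Classic fast non-dominated sort (the O(M N^2) baseline).
def dominates(a, b):
    # a dominates b: no worse in every objective, better in at least one
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def fast_non_dominated_sort(points):
    n = len(points)
    dominated = [[] for _ in range(n)]   # solutions each i dominates
    counts = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]
```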
NASA Astrophysics Data System (ADS)
Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie
2012-03-01
Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage once found. Early gastric cancer detection is therefore an effective approach to decrease gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximations to the radiative transfer equation are not optimal for this problem. To address this gastric cancer specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissues. Radiosity theory, integrated with the diffusion equation to form the hybrid light transport model, is utilized to describe light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem which incorporates an l1 norm based regularization term to reveal the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated in a digital mouse based simulation, with a reconstruction error of less than 1 mm. An in situ gastric cancer-bearing nude mouse based experiment is then conducted. The preliminary result demonstrates the ability of the novel BLT reconstruction algorithm in early gastric cancer detection.
Computer assisted diagnostic system in tumor radiography.
Faisal, Ahmed; Parveen, Sharmin; Badsha, Shahriar; Sarwar, Hasan; Reza, Ahmed Wasif
2013-06-01
An improved and efficient method is presented in this paper to achieve a better trade-off between noise removal and edge preservation, thereby detecting the tumor region of MRI brain images automatically. A compass operator has been used in the fourth-order Partial Differential Equation (PDE) based denoising technique to preserve the anatomically significant information at the edges. A new morphological technique is also introduced for stripping the skull region from the brain images, which consequently aids the process of detecting the tumor accurately. Finally, automatic seeded region growing segmentation based on an improved single seed point selection algorithm is applied to detect the tumor. The method is tested on publicly available MRI brain images and gives an average PSNR (Peak Signal to Noise Ratio) of 36.49. The obtained results also show a detection accuracy of 99.46%, which is a significant improvement over existing results.
Detection of text strings from mixed text/graphics images
NASA Astrophysics Data System (ADS)
Tsai, Chien-Hua; Papachristou, Christos A.
2000-12-01
A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region growing) strategy, the algorithm is able to distinguish text from graphics and adapts to changes in document type, language category (e.g., English, Chinese and Japanese), text font style and size, and text string orientation within digital images. In addition, it tolerates the skew that commonly occurs in documents, without requiring skew correction prior to discrimination, whereas methods such as projection profiles or run-length coding are not always suitable under such conditions. The method has been tested on a variety of printed documents from different origins with one common set of parameters, and experimental results on several test images demonstrate the algorithm's performance in terms of computational efficiency.
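The union-find machinery behind such a grouping strategy is compact; a generic sketch with path halving, where the merge test (e.g. bounding-box adjacency of two connected components) is left as an assumed predicate:

```python
# Union-find over n connected components; merging builds text strings.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[rj] = ri

# Usage idea: for each pair of components whose boxes are close enough,
# uf.union(i, j); components sharing a root form one text string.
```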
Pancreas and cyst segmentation
NASA Astrophysics Data System (ADS)
Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie
2016-03-01
Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
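The overlap figures quoted above are Dice coefficients; for reference, a minimal computation on binary masks:

```python
# Dice coefficient between a segmentation and a reference mask.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(a.sum() + b.sum(), 1)
```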
Novel multimodality segmentation using level sets and Jensen-Rényi divergence.
Markel, Daniel; Zaidi, Habib; El Naqa, Issam
2013-12-01
Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with R2 values of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
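For illustration, the JRD between intensity histograms is the Rényi entropy of their weighted mixture minus the mixed Rényi entropies; the equal weights and the alpha value below are illustrative choices, not the paper's settings.

```python
# Jensen-Rényi divergence between normalised intensity histograms.
import numpy as np

def renyi_entropy(p, alpha=0.5):
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(hists, alpha=0.5):
    hists = [h / h.sum() for h in np.asarray(hists, float)]
    w = np.full(len(hists), 1.0 / len(hists))       # equal weights
    mixture = sum(wi * h for wi, h in zip(w, hists))
    return renyi_entropy(mixture, alpha) - sum(
        wi * renyi_entropy(h, alpha) for wi, h in zip(w, hists))
```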
Stream Clustering of Growing Objects
NASA Astrophysics Data System (ADS)
Siddiqui, Zaigham Faraz; Spiliopoulou, Myra
We study incremental clustering of objects that grow and accumulate over time. The objects come from a multi-table stream e.g. streams of
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm
NASA Astrophysics Data System (ADS)
Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter
2004-05-01
The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard Desktop PC (30sec-5min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method performed generally much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would be thenceforth applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge
2008-01-01
This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Tool Kit) software. The developed techniques have been implemented in a virtual bone-drilling software program, which allows the user to manipulate a virtual drill with a PHANToM device to make holes in a bone model derived from real CT scan data.
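For the region-growing extraction of bone structures described above, a minimal sketch of intensity-interval region growing from a seed voxel could look like the following (a plain breadth-first flood fill; the authors' exact acceptance criterion is not given, so the [low, high] interval here is an assumption):

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, low, high):
    """Flood-fill style 3D region growing from a seed voxel.

    Voxels are accepted while their intensity lies in [low, high];
    this is a generic sketch, not the paper's exact criterion.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    if not (low <= volume[seed] <= high):
        return mask
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not mask[n] and low <= volume[n] <= high:
                mask[n] = True
                queue.append(n)
    return mask
```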
Regional yield predictions of malting barley by remote sensing and ancillary data
NASA Astrophysics Data System (ADS)
Weissteiner, Christof J.; Braun, Matthias; Kuehbauch, Walter
2004-02-01
Yield forecasts are of high interest to the malting and brewing industry in order to allow the most convenient purchasing policy for raw materials. In this investigation, malting barley (Hordeum vulgare L.) yield forecasts were performed for typical growing regions in South-Western Germany. Multisensor and multitemporal remote sensing data, together with ancillary meteorological, agrostatistical, topographical and pedological data, were used as input to prediction models based on an empirical-statistical modeling approach. Since spring barley production depends on both acreage and yield per area, a classification step is needed; it was performed with a supervised multitemporal classification algorithm using optical remote sensing data (LANDSAT TM/ETM+). A comparison between a pixel-based and an object-oriented classification algorithm was carried out. The basic version of the yield estimation model was built by linear correlation of remote sensing data (NOAA-AVHRR NDVI), CORINE land cover data and agrostatistical data. In an extended version, meteorological data (temperature, precipitation, etc.) and soil data were incorporated. Both the basic and the extended prediction systems led to feasible results, depending on the selection of the time span for NDVI accumulation.
Automatic FDG-PET-based tumor and metastatic lymph node segmentation in cervical cancer
NASA Astrophysics Data System (ADS)
Arbonès, Dídac R.; Jensen, Henrik G.; Loft, Annika; Munck af Rosenschöld, Per; Hansen, Anders Elias; Igel, Christian; Darkner, Sune
2014-03-01
Treatment of cervical cancer, one of the three most commonly diagnosed cancers worldwide, often relies on delineations of the tumour and metastases based on PET imaging using the contrast agent 18F-Fluorodeoxyglucose (FDG). We present a robust automatic algorithm for segmenting the gross tumour volume (GTV) and metastatic lymph nodes in such images. As the cervix is located next to the bladder and FDG is washed out through the urine, the PET-positive GTV and the bladder cannot be easily separated. Our processing pipeline starts with histogram-based region-of-interest detection followed by level set segmentation. After that, morphological image operations combined with clustering, region growing, and nearest-neighbour labelling allow the bladder to be removed and the tumour and metastatic lymph nodes to be identified. The proposed method was applied to 125 patients, and no failure could be detected by visual inspection. We compared our segmentations with manual delineations of corresponding MR and CT images, showing that the detected GTV lies at least 97.5% within the MR/CT delineations. We conclude that the algorithm has a very high potential for substituting the tedious manual delineation of PET-positive areas.
Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man
2015-01-01
Most applications in the field of medical image processing require precise estimation. To improve segmentation accuracy, this study proposes a novel segmentation method for coronary arteries that allows automatic and accurate detection of coronary pathologies. The proposed segmentation method includes two parts. First, 3D region growing is applied to give an initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, is detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with a 3D neutrosophic transformation can accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries is segmented correctly by the proposed method. The obtained results were compared with ground truth values obtained from commercial GE Healthcare software and with the level-set method proposed by Yang et al. (2007), and indicate that the proposed method performs better in terms of the efficiency analyzed. Based on the initial segmentation of the coronary arteries obtained from 3D region growing, one-level 3D DWT and the 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed in most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region-growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern produced during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimating the normal at each point, which prevents errors in normal estimation from propagating into the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
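As a rough illustration of grouping scan-grid points into smooth surfaces, the sketch below grows regions while neighbouring normals stay within an angular threshold. Note that the published method deliberately avoids per-point normal estimation; this simplified version assumes precomputed normals purely to keep the example short:

```python
import numpy as np
from collections import deque

def segment_by_normals(normals, valid, angle_thresh_deg=5.0):
    """Group grid-organized points into smooth surface regions.

    normals: (H, W, 3) unit normals on the scan grid; valid: (H, W) bool
    mask of usable returns. Neighbouring cells join a region while their
    normals differ by less than angle_thresh_deg (a stand-in criterion).
    """
    H, W, _ = normals.shape
    labels = np.zeros((H, W), dtype=int)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    current = 0
    for sy in range(H):
        for sx in range(W):
            if labels[sy, sx] or not valid[sy, sx]:
                continue
            current += 1
            labels[sy, sx] = current
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                    if 0 <= ny < H and 0 <= nx < W and valid[ny, nx] \
                       and not labels[ny, nx] \
                       and np.dot(normals[y, x], normals[ny, nx]) > cos_t:
                        labels[ny, nx] = current
                        queue.append((ny, nx))
    return labels
```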
NASA Astrophysics Data System (ADS)
Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.
2016-11-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). The MTT problem quickly becomes an NP-hard combinatorial optimization problem. This means that the effort required to solve the MTT problem increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses a polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
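A generic elitist GA skeleton conveys the central idea: the best individuals are carried unchanged into the next generation. The sketch below is a continuous-variable toy (tournament selection, one-point crossover, Gaussian mutation); the paper's measurement-association encoding and fitness function are problem-specific and not reproduced:

```python
import numpy as np

def elitist_ga(fitness, dim, pop_size=50, n_elite=2,
               generations=200, mut_sigma=0.1, seed=None):
    """Elitist GA for a real-valued maximization problem (sketch)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        elite = pop[order[:n_elite]]          # carried over unchanged
        children = []
        while len(children) < pop_size - n_elite:
            # Tournament selection of two parents.
            i, j, k, l = rng.integers(0, pop_size, size=4)
            p1 = pop[i] if scores[i] > scores[j] else pop[j]
            p2 = pop[k] if scores[k] > scores[l] else pop[l]
            cut = rng.integers(1, dim) if dim > 1 else 1
            child = np.concatenate([p1[:cut], p2[cut:]])   # crossover
            child += rng.normal(0.0, mut_sigma, size=dim)  # mutation
            children.append(child)
        pop = np.vstack([elite, np.array(children)])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]
```

For example, `elitist_ga(lambda x: -np.sum(x**2), dim=5)` drives the population toward the zero vector.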
An improved three-dimension reconstruction method based on guided filter and Delaunay
NASA Astrophysics Data System (ADS)
Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
Binocular stereo vision is becoming a research hotspot in image processing. Building on a traditional adaptive-weight stereo matching algorithm, we construct the cost volume by averaging the absolute difference (AD) of the RGB color channels and adding the x-derivative of the grayscale image. We then use a guided filter in the cost aggregation step and a weighted median filter for post-processing to address edge problems. To recover locations in real space, we combine the depth information with the camera calibration to project each pixel of the 2D image into a 3D coordinate matrix. For surface reconstruction, we add a projection step to the region-growing algorithm: all points are projected onto a 2D plane along the normals of the point cloud, triangulated there, and the resulting connectivity among the points is mapped back to 3D space. For the triangulation in the 2D plane we use the Delaunay algorithm because it yields meshes of optimal quality. We configured OpenCV and PCL in Visual Studio for testing, and the experimental results show that the proposed algorithm achieves higher computational accuracy of disparity and can reproduce the details of the real mesh model.
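The cost-volume construction described above (per-pixel absolute colour difference plus an x-gradient term, truncated and blended) can be sketched as follows; the blending weight and truncation values are illustrative assumptions:

```python
import numpy as np

def build_cost_volume(left, right, max_disp,
                      alpha=0.6, tau_ad=21.0, tau_grad=8.0):
    """AD + x-gradient matching cost, averaged over RGB, per disparity.

    left/right: float (H, W, 3) images; returns (max_disp, H, W) costs.
    alpha, tau_ad, tau_grad are illustrative settings.
    """
    H, W, _ = left.shape
    gxL = np.gradient(left.mean(axis=2), axis=1)   # x-derivative, grayscale
    gxR = np.gradient(right.mean(axis=2), axis=1)
    # Border cells keep the maximum (truncated) cost.
    volume = np.full((max_disp, H, W),
                     alpha * tau_ad + (1 - alpha) * tau_grad)
    for d in range(max_disp):
        ad = np.abs(left[:, d:] - right[:, :W - d or W]).mean(axis=2)
        grad = np.abs(gxL[:, d:] - gxR[:, :W - d or W])
        volume[d][:, d:] = (alpha * np.minimum(ad, tau_ad)
                            + (1 - alpha) * np.minimum(grad, tau_grad))
    return volume
```

In the full pipeline each disparity slice of this volume would then be smoothed with a guided filter before winner-take-all disparity selection.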
From Phenomena to Objects: Segmentation of Fuzzy Objects and its Application to Oceanic Eddies
NASA Astrophysics Data System (ADS)
Wu, Qingling
A challenging image analysis problem that has received limited attention to date is the isolation of fuzzy objects---i.e. those with inherently indeterminate boundaries---from continuous field data. This dissertation seeks to bridge the gap between, on the one hand, the recognized need for Object-Based Image Analysis of fuzzy remotely sensed features, and on the other, the optimization of existing image segmentation techniques for the extraction of more discretely bounded features. Using mesoscale oceanic eddies as a case study of a fuzzy object class evident in Sea Surface Height Anomaly (SSHA) imagery, the dissertation demonstrates firstly, that the widely used region-growing and watershed segmentation techniques can be optimized and made comparable in the absence of ground truth data using the principle of parsimony. However, they both have significant shortcomings, with the region growing procedure creating contour polygons that do not follow the shape of eddies while the watershed technique frequently subdivides eddies or groups together separate eddy objects. Secondly, it was determined that these problems can be remedied by using a novel Non-Euclidian Voronoi (NEV) tessellation technique. NEV is effective in isolating the extrema associated with eddies in SSHA data while using a non-Euclidian cost-distance based procedure (based on cumulative gradients in ocean height) to define the boundaries between fuzzy objects. Using this procedure as the first stage in isolating candidate eddy objects, a novel "region-shrinking" multicriteria eddy identification algorithm was developed that includes consideration of shape and vorticity. Eddies identified by this region-shrinking technique compare favorably with those identified by existing techniques, while simplifying and improving existing automated eddy detection algorithms. However, it also tends to find a larger number of eddies as a result of its ability to separate what other techniques identify as connected eddies. The research presented here is of significance not only to eddy research in oceanography, but also to other areas of Earth System Science for which the automated detection of features lacking rigid boundary definitions is of importance.
Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos
2015-01-01
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms on specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) apply generally to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them on 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of the algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluations.
NASA Astrophysics Data System (ADS)
Moral García, Francisco J.; Rebollo, Francisco J.; Paniagua, Luis L.; García, Abelardo
2014-05-01
Different bioclimatic indices have been proposed to determine the wine suitability of a region. Some of them are related to air temperature, but the hydric component of climate should also be considered which, in turn, is influenced by the precipitation during the different stages of the grapevine growing and ripening periods. In this work we propose using the information obtained from 10 bioclimatic indices and variables (heliothermal index, HI; cool night index, CI; dryness index, DI; growing season temperature, GST; the Winkler index, WI; September mean thermal amplitude, MTA; annual precipitation, AP; precipitation during flowering, PDF; precipitation before flowering, PBF; and summer precipitation, SP) as inputs to an objective and probabilistic model, the Rasch model, with the aim of integrating their individual effects, obtaining a climate measure that summarizes all the main bioclimatic indices which could influence wine suitability, and utilizing the Rasch measures to generate homogeneous climatic zones. The use of the Rasch model to estimate viticultural suitability constitutes a new application of great practical importance, making it possible to rationally determine locations in a region where high viticultural potential exists and to establish a ranking of the bioclimatic indices or variables which exert an important influence on wine suitability in a region. Furthermore, from the measures of viticultural suitability at some locations, estimates can be computed using a geostatistical algorithm, and these estimates can be utilized to map viticultural suitability potential in a region. To illustrate the process, an application to Extremadura, southwestern Spain, is shown.
NASA Astrophysics Data System (ADS)
Moral, Francisco J.; Rebollo, Francisco J.; Paniagua, Luis L.; García, Abelardo; Honorio, Fulgencio
2016-05-01
Different climatic indices have been proposed to determine the wine suitability of a region. Some of them are related to air temperature, but the hydric component of climate should also be considered which, in turn, is influenced by the precipitation during the different stages of the grapevine growing and ripening periods. In this study, we propose using the information obtained from ten climatic indices [heliothermal index (HI), cool night index (CI), dryness index (DI), growing season temperature (GST), the Winkler index (WI), September mean thermal amplitude (MTA), annual precipitation (AP), precipitation during flowering (PDF), precipitation before flowering (PBF), and summer precipitation (SP)] as inputs to an objective and probabilistic model, the Rasch model, with the aim of integrating their individual effects, obtaining a climate measure that summarizes all the main climatic indices which could influence wine suitability from a climate viewpoint, and utilizing the Rasch measures to generate homogeneous climatic zones. The use of the Rasch model to estimate viticultural climatic suitability constitutes a new application of great practical importance, making it possible to rationally determine locations in a region where high viticultural potential exists and to establish a ranking of the climatic indices which exert an important influence on wine suitability in a region. Furthermore, from the measures of viticultural climatic suitability at some locations, estimates can be computed using a geostatistical algorithm, and these estimates can be utilized to map viticultural climatic zones in a region. To illustrate the process, an application to Extremadura, southwestern Spain, is shown.
NASA Astrophysics Data System (ADS)
Zhang, Wenyu; Yang, Yushu; Zhang, Shuai; Yu, Dejian; Chen, Yong
2018-05-01
With the growing complexity of customer requirements and the increasing scale of manufacturing services, how to select and combine single services to meet a complex customer demand has become a growing concern. This paper presents a new manufacturing service composition method to solve the multi-objective optimization problem based on quality of service (QoS). The proposed model not only presents different methods for calculating the transportation time and transportation cost under various structures but also solves the three-dimensional composition optimization problem, covering service aggregation, service selection, and service scheduling simultaneously. Further, an improved Flower Pollination Algorithm (IFPA) is proposed to solve the three-dimensional composition optimization problem using a matrix-based representation scheme. The mutation and crossover operators of the Differential Evolution (DE) algorithm are also used to extend the basic Flower Pollination Algorithm (FPA) and improve its performance. The experimental results confirm that, compared with the Genetic Algorithm, DE, and the basic FPA, the proposed method performs better than these meta-heuristic algorithms and obtains better manufacturing service composition solutions.
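A compact sketch of the hybridization idea follows: FPA's global/local pollination switch, with DE's mutation and crossover as the local move, shown here on a generic continuous minimization problem. The paper's matrix-based encoding of service aggregation, selection, and scheduling is not reproduced:

```python
import numpy as np

def ifpa_minimize(obj, dim, pop=30, iters=300,
                  p_switch=0.8, F=0.5, CR=0.9, seed=None):
    """FPA hybridized with DE operators; continuous toy version."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, (pop, dim))
    fit = np.array([obj(x) for x in X])
    i0 = int(fit.argmin())
    best, best_f = X[i0].copy(), float(fit[i0])
    for _ in range(iters):
        for i in range(pop):
            if rng.random() < p_switch:
                # Global pollination: heavy-tailed step toward best flower.
                step = 0.01 * rng.standard_cauchy(dim)
                cand = X[i] + step * (best - X[i])
            else:
                # Local pollination replaced by DE/rand/1 + crossover.
                a, b, c = X[rng.choice(pop, 3, replace=False)]
                mutant = a + F * (b - c)
                cross = rng.random(dim) < CR
                cand = np.where(cross, mutant, X[i])
            f = obj(cand)
            if f < fit[i]:                      # greedy selection
                X[i], fit[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), float(f)
    return best, best_f

# e.g. ifpa_minimize(lambda x: float(np.sum(x**2)), dim=10)
```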
Amazon Rain Forest Classification Using J-ERS-1 SAR Data
NASA Technical Reports Server (NTRS)
Freeman, A.; Kramer, C.; Alves, M.; Chapman, B.
1994-01-01
The Amazon rain forest is a region of the earth that is undergoing rapid change. Man-made disturbance, such as clear cutting for agriculture or mining, is altering the rain forest ecosystem. For many parts of the rain forest, seasonal changes from the wet to the dry season are also significant. Changes in the seasonal cycle of flooding and draining can cause significant alterations in the forest ecosystem. Because much of the Amazon basin is regularly covered by thick clouds, optical and infrared coverage from the LANDSAT and SPOT satellites is sporadic. Imaging radar offers a much better potential for regular monitoring of changes in this region. In particular, the J-ERS-1 satellite carries an L-band HH SAR system which, via an on-board tape recorder, can collect data from almost anywhere on the globe at any time of year. In this paper, we show how J-ERS-1 radar images can be used to accurately classify different forest types (i.e., forest, hill forest, flooded forest), disturbed areas such as clear cuts and urban areas, and river courses in the Amazon basin. J-ERS-1 data has also shown significant differences between the dry and wet season, indicating a strong potential for monitoring seasonal change. The algorithm used to classify J-ERS-1 data is a standard maximum-likelihood classifier, using the radar image local mean and standard deviation of texture as input. Rivers and clear cuts are detected using edge detection and region-growing algorithms. Since this classifier is intended to operate successfully on data taken over the entire Amazon, several options are available to enable the user to modify the algorithm to suit a particular image.
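The maximum-likelihood classification over (local mean, local texture standard deviation) features can be sketched as a per-class Gaussian model; the feature extraction and class definitions here are placeholders, not the study's exact configuration:

```python
import numpy as np

def train_ml_classifier(samples_per_class):
    """Fit a Gaussian model (mean vector, covariance) per class.

    samples_per_class: dict name -> (N, 2) array of [local mean,
    local std] features from training regions (illustrative features).
    """
    model = {}
    for name, X in samples_per_class.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        model[name] = (mu, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return model

def classify(model, features):
    """Assign each (N, 2) feature vector to its maximum-likelihood class."""
    names = list(model)
    scores = []
    for name in names:
        mu, icov, logdet = model[name]
        d = features - mu
        # Log-likelihood up to a constant: -0.5*(log|C| + d^T C^-1 d)
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, icov, d)))
    return np.array(names)[np.argmax(scores, axis=0)]
```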
Fortier, Véronique; Levesque, Ives R
2018-06-01
Phase processing impacts the accuracy of quantitative susceptibility mapping (QSM). Techniques for phase unwrapping and background removal have been proposed and demonstrated mostly in the brain. In this work, phase processing was evaluated in the context of large susceptibility variations (Δχ) and negligible signal, in particular for susceptibility estimation using the iterative phase replacement (IPR) algorithm. Continuous Laplacian, region-growing, and quality-guided unwrapping were evaluated. For background removal, Laplacian boundary value (LBV), projection onto dipole fields (PDF), sophisticated harmonic artifact reduction for phase data (SHARP), variable-kernel SHARP (V-SHARP), regularization-enabled SHARP (RESHARP), and 3D quadratic polynomial field removal were studied. Each algorithm was evaluated quantitatively in simulation and qualitatively in vivo. Additionally, IPR-QSM maps were produced to evaluate the impact of phase processing on the susceptibility in the context of large Δχ with negligible signal. Quality-guided unwrapping was the most accurate technique, whereas continuous Laplacian performed poorly in this context. All background removal algorithms tested introduced substantial phase inaccuracies, suggesting that techniques used for the brain do not translate well to situations where large Δχ and no or low signal are expected. LBV produced the smallest errors, followed closely by PDF. The results suggest that quality-guided unwrapping should be preferred, with PDF or LBV for background removal, for QSM in regions with large Δχ and negligible signal. This reduces the susceptibility inaccuracy introduced by phase processing. Accurate background removal remains an open question. Magn Reson Med 79:3103-3113, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
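Of the unwrapping techniques compared, the quality-guided approach admits a short sketch: pixels are unwrapped in decreasing order of a reliability map, each relative to an already-unwrapped neighbour. The reliability measure itself is left abstract here (the paper's choice is not reproduced):

```python
import heapq
import numpy as np

def quality_guided_unwrap(phase, quality):
    """Quality-guided 2D phase unwrapping (minimal sketch).

    quality: any per-pixel reliability map; higher is assumed better.
    """
    H, W = phase.shape
    out = phase.astype(float).copy()
    done = np.zeros((H, W), dtype=bool)
    seed = np.unravel_index(np.argmax(quality), quality.shape)
    done[seed] = True
    heap = []

    def push_neighbours(y, x):
        for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
            if 0 <= ny < H and 0 <= nx < W and not done[ny, nx]:
                # Negative quality: heapq pops the smallest item first.
                heapq.heappush(heap, (-quality[ny, nx], ny, nx, y, x))

    push_neighbours(*seed)
    while heap:
        _, y, x, ry, rx = heapq.heappop(heap)
        if done[y, x]:
            continue
        # Add the 2*pi multiple bringing the pixel closest to its
        # already-unwrapped reference neighbour (ry, rx).
        diff = out[y, x] - out[ry, rx]
        out[y, x] -= 2 * np.pi * np.round(diff / (2 * np.pi))
        done[y, x] = True
        push_neighbours(y, x)
    return out
```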
Horror Image Recognition Based on Context-Aware Multi-Instance Learning.
Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng
2015-12-01
Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on the tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.
Accurate airway segmentation based on intensity structure analysis and graph-cut
NASA Astrophysics Data System (ADS)
Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku
2016-03-01
This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based on region growing and machine learning techniques; however, these methods fail to detect the peripheral bronchial branches and cause a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in the CT volume; then a multiscale cavity-enhancement filter is employed to detect cavity-like structures in the enhanced result. In the second step, a support vector machine (SVM) is used to construct a classifier that removes the false-positive (FP) regions generated in the first step. Finally, the graph-cut algorithm is utilized to connect all of the candidate voxels into an integrated airway tree. We applied this method to sixteen 3D chest CT volumes. The results showed that the branch detection rate of this method reaches about 77.7% without leakage into the lung parenchyma.
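The first step, Hessian-based enhancement of line-like structures, can be sketched with Gaussian derivatives and per-voxel eigenvalues; the scale and response formula below are simplified assumptions rather than the paper's exact multiscale filter:

```python
import numpy as np
from scipy import ndimage

def hessian_line_filter(volume, sigma=1.5):
    """Enhance bright line-like (tubular) structures in a 3D volume.

    Scores voxels by the two most negative Hessian eigenvalues: bright
    tubes have one eigenvalue near zero and two strongly negative ones.
    """
    v = volume.astype(float)
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            # Gaussian-smoothed second derivative along axes i and j.
            H[..., i, j] = ndimage.gaussian_filter(v, sigma, order=order)
    eig = np.linalg.eigvalsh(H)     # ascending: eig[..., 0] most negative
    l1, l2 = eig[..., 0], eig[..., 1]
    return np.where((l1 < 0) & (l2 < 0), np.sqrt(l1 * l2), 0.0)
```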
NASA Astrophysics Data System (ADS)
Richetti, J.; Ahmad, I.; Aristizabal, F.; Judge, J.
2017-12-01
Determining maize agricultural production under climate variability is valuable to policy makers in Pakistan, since maize is the third most produced crop by area after wheat and rice. This study aims to predict maize production under climate variability. Two hundred ground truth points of both maize and non-maize land covers were collected in the Faisalabad district during the growing seasons of 2015 and 2016. Landsat-8 images taken in the second week of May, which correspond spatially and temporally to the local peak growing season for maize, were gathered. For classifying the region, training data were constructed for a variety of machine learning algorithms by sampling the second, third, and fourth bands of the Landsat-8 imagery at these reference locations. Cross-validation was used for parameter tuning as well as for estimating generalized performance. All classifiers achieved overall accuracies greater than 90% for both years, and a support vector machine with a radial basis kernel recorded the maximum accuracy of 97%. The tuned models were used to determine the spatial distribution of maize fields for both growing seasons in the Faisalabad district, using parallel processing to improve computation time. The overall classified maize growing area differed by 12% from that reported by the Crop Reporting Service (CRS) of Punjab, Pakistan, for both 2015 and 2016. For agricultural production, the normalized difference vegetation index from Landsat-8 and climate indicators from ground stations will be used as inputs to a variety of machine learning regression algorithms. The expected results will be compared with actual yields from 64 commercial farms. To assess the impact of climate variability on maize production, historical climate data from the previous 30 years will be used in the developed model.
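The cross-validated tuning of the RBF-kernel SVM can be expressed in a few lines of scikit-learn; the data below are synthetic stand-ins for the 200 ground-truth samples:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in data: three Landsat-8 band values per ground-truth point and
# a 1/0 maize label (synthetic, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0).astype(int)

# Cross-validated tuning of C and gamma for the RBF-kernel SVM.
grid = GridSearchCV(SVC(kernel='rbf'),
                    param_grid={'C': [1, 10, 100],
                                'gamma': ['scale', 0.1, 1.0]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
# The tuned model then labels every scene pixel: grid.predict(pixels)
```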
Transculturalization of a diabetes-specific nutrition algorithm: Asian application.
Su, Hsiu-Yueh; Tsang, Man-Wo; Huang, Shih-Yi; Mechanick, Jeffrey I; Sheu, Wayne H-H; Marchetti, Albert
2012-04-01
The prevalence of type 2 diabetes (T2D) in Asia is growing at an alarming rate, posing significant clinical and economic risk to health care stakeholders. Commonly, Asian patients with T2D manifest a distinctive combination of characteristics that include earlier disease onset, distinct pathophysiology, syndrome of complications, and shorter life expectancy. Optimizing treatment outcomes for such patients requires a coordinated inclusive care plan and knowledgeable practitioners. Comprehensive management starts with medical nutrition therapy (MNT) in a broader lifestyle modification program. Implementing diabetes-specific MNT in Asia requires high-quality and transparent clinical practice guidelines (CPGs) that are regionally adapted for cultural, ethnic, and socioeconomic factors. Respected CPGs for nutrition and diabetes therapy are available from prestigious medical societies. For cost efficiency and effectiveness, health care authorities can select these CPGs for Asian implementation following abridgement and cultural adaptation that includes: defining nutrition therapy in meaningful ways, selecting lower cutoff values for healthy body mass indices and waist circumferences (WCs), identifying the dietary composition of MNT based on regional availability and preference, and expanding nutrition therapy for concomitant hypertension, dyslipidemia, overweight/obesity, and chronic kidney disease. An international task force of respected health care professionals has contributed to this process. To date, task force members have selected appropriate evidence-based CPGs and simplified them into an algorithm for diabetes-specific nutrition therapy. Following cultural adaptation, Asian and Asian-Indian versions of this algorithmic tool have emerged. The Asian version is presented in this report.
Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen
2013-01-01
The accumulation of thermal time usually represents the local heat resources that drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (Ta) and filling in missing pixels due to cloudy and low-quality images in growing degree days (GDD) calculation from remotely sensed data, a novel spatio-temporal algorithm for Ta estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study of calculating heat accumulation, expressed as accumulative growing degree days (AGDD) above 10 °C, from reconstructed Ta based on MODIS land surface temperature (LST) data. Verification of maximum Ta, minimum Ta, GDD, and AGDD derived from MODIS data against meteorological calculations showed high correlations, significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with almost 10% relative error. However, the feasibility of employing AGDD anomaly maps to characterize the 2001-2010 spatio-temporal variability of heat accumulation and of estimating the 2011 heat accumulation distribution using only MODIS data was demonstrated. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale.
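The underlying GDD/AGDD arithmetic is worth stating explicitly; a minimal sketch using the standard average-temperature formulation above the 10 °C base mentioned in the abstract:

```python
import numpy as np

def growing_degree_days(t_max, t_min, base=10.0):
    """Daily GDD above a base temperature (degrees C).

    t_max/t_min: arrays of daily maximum/minimum air temperature,
    e.g. reconstructed from MODIS LST. GDD = max(0, (Tmax+Tmin)/2 - base).
    """
    return np.maximum(0.0, (t_max + t_min) / 2.0 - base)

def accumulated_gdd(t_max, t_min, base=10.0):
    """AGDD: running sum of daily GDD over the season."""
    return np.cumsum(growing_degree_days(t_max, t_min, base))
```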
Incorporating Edge Information into Best Merge Region-Growing Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Pasolli, Edoardo
2014-01-01
We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region-growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region-growing decision process. We present here three new versions of HSeg that include local edge information in the region-growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.
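For readers unfamiliar with best-merge segmentation, the sketch below shows the core greedy loop: start with one region per pixel and repeatedly merge the most similar adjacent pair. It is deliberately naive (quadratic scans, mean-difference criterion) and omits HSeg's distinguishing non-adjacent aggregation and the new edge terms:

```python
import numpy as np

def best_merge_segmentation(image, stop_thresh=10.0):
    """Greedy best-merge region growing on a 2D image (didactic sketch)."""
    H, W = image.shape
    labels = np.arange(H * W).reshape(H, W)
    stats = {i: (float(v), 1) for i, v in enumerate(image.ravel())}
    adj = {i: set() for i in range(H * W)}
    for y in range(H):
        for x in range(W):
            i = labels[y, x]
            if x + 1 < W:
                j = labels[y, x + 1]; adj[i].add(j); adj[j].add(i)
            if y + 1 < H:
                j = labels[y + 1, x]; adj[i].add(j); adj[j].add(i)
    mean = lambda r: stats[r][0] / stats[r][1]
    while True:
        # Find the most similar pair of adjacent regions.
        best = min(((abs(mean(a) - mean(b)), a, b)
                    for a in adj for b in adj[a] if a < b), default=None)
        if best is None or best[0] > stop_thresh:
            break
        _, a, b = best
        sa, na = stats[a]; sb, nb = stats.pop(b)
        stats[a] = (sa + sb, na + nb)
        adj[a].discard(b)
        for c in adj.pop(b):          # reconnect b's neighbours to a
            adj[c].discard(b)
            if c != a:
                adj[c].add(a)
                adj[a].add(c)
        labels[labels == b] = a
    return labels
```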
Zhou, Zhen; Huang, Jingfeng; Wang, Jing; Zhang, Kangyu; Kuang, Zhaomin; Zhong, Shiquan; Song, Xiaodong
2015-01-01
Most areas planted with sugarcane are located in southern China. However, remote sensing of sugarcane has been limited because useable remote sensing data are limited due to the cloudy climate of this region during the growing season and severe spectral mixing with other crops. In this study, we developed a methodology for automatically mapping sugarcane over large areas using time-series middle-resolution remote sensing data. For this purpose, two major techniques were used, the object-oriented method (OOM) and data mining (DM). In addition, time-series Chinese HJ-1 CCD images were obtained during the sugarcane growing period. Image objects were generated using a multi-resolution segmentation algorithm, and DM was implemented using the AdaBoost algorithm, which generated the prediction model. The prediction model was applied to the HJ-1 CCD time-series image objects, and then a map of the sugarcane planting area was produced. The classification accuracy was evaluated using independent field survey sampling points. The confusion matrix analysis showed that the overall classification accuracy reached 93.6% and that the Kappa coefficient was 0.85. Thus, the results showed that this method is feasible, efficient, and applicable for extrapolating the classification of other crops in large areas where the application of high-resolution remote sensing data is impractical due to financial considerations or because qualified images are limited.
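The object-level AdaBoost classification step can be sketched directly with scikit-learn; the features below are synthetic stand-ins for the per-object time-series statistics derived from the HJ-1 image objects:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Stand-in data: per-object time-series statistics (e.g. a mean value per
# HJ-1 acquisition date) and 1/0 labels for sugarcane/other. The real
# features come from the multi-resolution segmentation objects.
rng = np.random.default_rng(0)
object_features = rng.normal(size=(200, 8))
labels = (object_features[:, 0] > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=100)
clf.fit(object_features, labels)
print(clf.score(object_features, labels))
# New image objects are then mapped with clf.predict(new_features).
```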
NASA Technical Reports Server (NTRS)
Instrella, Ron; Chirayath, Ved
2016-01-01
In recent years, there has been a growing interest among biologists in monitoring the short and long term health of the world's coral reefs. The environmental impact of climate change poses a growing threat to these biologically diverse and fragile ecosystems, prompting scientists to use remote sensing platforms and computer vision algorithms to analyze shallow marine systems. In this study, we present a novel method for performing coral segmentation and classification from aerial data collected from small unmanned aerial vehicles (sUAV). Our method uses Fluid Lensing algorithms to remove and exploit strong optical distortions created along the air-fluid boundary to produce cm-scale resolution imagery of the ocean floor at depths up to 5 meters. A 3D model of the reef is reconstructed using structure from motion (SFM) algorithms, and the associated depth information is combined with multidimensional maximum a posteriori (MAP) estimation to separate organic from inorganic material and classify coral morphologies in the Fluid-Lensed transects. In this study, MAP estimation is performed using a set of manually classified 100 x 100 pixel training images to determine the most probable coral classification within an interrogated region of interest. Aerial footage of a coral reef was captured off the coast of American Samoa and used to test our proposed method. 90 x 20 meter transects of the Samoan coastline undergo automated classification and are manually segmented by a marine biologist for comparison, leading to success rates as high as 85%. This method has broad applications for coastal remote sensing, and will provide marine biologists access to large swaths of high resolution, segmented coral imagery.
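The MAP decision rule at the heart of the classification step is compact: choose the class maximizing likelihood times prior. Below is a scalar-feature sketch; the study's features combine colour and SfM-derived depth, and its class models come from 100 x 100 pixel training patches:

```python
import numpy as np

def map_classify(feature, class_stats, priors):
    """Pick the class maximizing posterior ~ likelihood x prior.

    class_stats: {name: (mean, std)} Gaussian models fitted to labeled
    training patches; priors: {name: p}. A scalar feature keeps the
    sketch short; the real method is multidimensional.
    """
    best_name, best_post = None, -np.inf
    for name, (mu, sd) in class_stats.items():
        log_like = -0.5 * ((feature - mu) / sd) ** 2 - np.log(sd)
        log_post = log_like + np.log(priors[name])
        if log_post > best_post:
            best_name, best_post = name, log_post
    return best_name

# e.g. map_classify(0.7, {'coral': (0.8, 0.1), 'sand': (0.3, 0.2)},
#                   {'coral': 0.4, 'sand': 0.6})
```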
2D/3D fetal cardiac dataset segmentation using a deformable model.
Dindoyal, Irving; Lambrou, Tryphon; Deng, Jing; Todd-Pokropek, Andrew
2011-07-01
To segment the fetal heart in order to facilitate the 3D assessment of cardiac function and structure. Ultrasound acquisition typically results in drop-out artifacts of the chamber walls. The authors outline a level set deformable model to automatically delineate the small fetal cardiac chambers. The level set is penalized from growing into an adjacent cardiac compartment using a novel collision detection term. The region-based model allows simultaneous segmentation of all four cardiac chambers from a user-defined seed point placed in each chamber. The segmented boundaries are automatically penalized from intersecting at walls with signal dropout. Root mean square errors of the perpendicular distances between the algorithm's delineation and manual tracings are within 2 mm, which is less than 10% of the length of a typical fetal heart. Ejection fractions were determined from the 3D datasets. The algorithm was validated using a physical phantom, segmenting volumes to within 13% of the physically determined values. This original work in fetal cardiac segmentation compares automatic and manual tracings against a physical phantom and also measures inter-observer variation.
Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.
Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi
2018-03-24
In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature, extracted from the aerial images, is then used to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification with the digital surface models of the two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
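A much-simplified stand-in for the changed-objects-above-ground step is plain DSM differencing with a height threshold and connected components; the paper's graph-cuts formulation is stronger, but the sketch shows the data flow (threshold and area values are assumptions):

```python
import numpy as np
from scipy import ndimage

def changed_objects_above_ground(dsm_t1, dsm_t2, dh=2.5, min_area=50):
    """Candidate changed objects from bi-temporal DSM differencing.

    Cells whose height changed by more than dh metres are kept and grouped
    into connected components; small components are discarded.
    """
    rising = (dsm_t2 - dsm_t1) > dh       # e.g. newly built / taller
    falling = (dsm_t1 - dsm_t2) > dh      # e.g. demolished / lower
    objects = []
    for mask, kind in ((rising, 'higher'), (falling, 'lower')):
        labels, n = ndimage.label(mask)
        for i in range(1, n + 1):
            region = labels == i
            if region.sum() >= min_area:
                objects.append((kind, region))
    return objects
```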
Zhu, Haitao; Demachi, Kazuyuki; Sekino, Masaki
2011-09-01
Positive contrast imaging methods produce enhanced signal at large magnetic field gradients in magnetic resonance imaging. Several postprocessing algorithms, such as susceptibility gradient mapping and phase gradient mapping methods, have been applied for positive contrast generation to detect cells targeted by superparamagnetic iron oxide nanoparticles. In the phase gradient mapping methods, a smoothness condition has to be satisfied to keep the phase gradient unwrapped. Moreover, the truncation artifact associated with performing the differentiation in k-space by multiplication with the frequency value has not previously been discussed. In this work, phase gradient methods are examined in the case where the smoothness condition is not satisfied and wrapping occurs. A region-growing unwrapping algorithm is applied to the phase gradient image to solve the problem. To reduce the truncation artifact, a cosine function is multiplied in k-space to eliminate the abrupt change at the boundaries. Simulation, phantom and in vivo experimental results demonstrate that the modified phase gradient mapping methods may produce improved positive contrast effects by reducing truncation or wrapping artifacts. Copyright © 2011 Elsevier Inc. All rights reserved.
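The k-space differentiation and cosine apodization can be sketched in a few lines; the Hann window below is one concrete choice of cosine taper, not necessarily the authors':

```python
import numpy as np

def x_derivative_kspace(image, apodize=True):
    """Differentiate along x by multiplication with i*k_x in k-space.

    With apodize=True, a Hann (cosine) window tapers the k-space edges
    to suppress the truncation (Gibbs) artifact.
    """
    ny, nx = image.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx)          # derivative operator
    window = 1.0
    if apodize:
        # Reorder the centered window to match fftfreq's layout.
        window = np.fft.ifftshift(np.hanning(nx))
    spec = np.fft.fft(image, axis=1)
    return np.real(np.fft.ifft(spec * kx * window, axis=1))
```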
Rattner, Alexander S.; Guillen, Donna Post; Joshi, Alark; ...
2016-03-17
Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of work has been reproduced in this field, as many research groups have developed specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. By providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Nagle, Nicholas N; Piburn, Jesse O
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for detailed information regarding residential energy consumption patterns has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy consumption, the majority of techniques are highly dependent on region-specific data sources and often require building- or dwelling-level details that are not publicly available for many regions in the United States. Furthermore, many existing methods do not account for errors in input data sources and may not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more general hybrid approach to high-resolution residential electricity consumption modeling by merging a dasymetric model with a complementary machine learning algorithm. The method's flexible data requirements and statistical framework ensure that the model is applicable to a wide range of regions and considers errors in input data sources.
NASA Astrophysics Data System (ADS)
Liao, Chun-Chih; Xiao, Furen; Wong, Jau-Min; Chiang, I.-Jen
Computed tomography (CT) of the brain is the preferred study in neurological emergencies. Physicians use CT to diagnose various types of intracranial hematomas, including epidural, subdural and intracerebral hematomas, according to their locations and shapes. We propose a novel method that can automatically diagnose intracranial hematomas by combining machine vision and knowledge discovery techniques. The skull on the CT slice is located and the depth of each intracranial pixel is labeled. After normalization of the pixel intensities by their depth, the hyperdense area of intracranial hematoma is segmented with multi-resolution thresholding and region growing. We then apply the C4.5 algorithm to construct a decision tree using the features of the segmented hematoma and the diagnoses made by physicians. The algorithm was evaluated on 48 pathological images from a single institute. The two discovered rules closely resemble those used by human experts, and are able to make correct diagnoses in all cases.
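The rule-learning step can be reproduced in spirit with any decision-tree learner; scikit-learn grows CART trees rather than C4.5, but yields similarly readable rules. Features and labels below are synthetic placeholders for the segmented-hematoma descriptors:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in per-hematoma features: [centroid depth, area, elongation,
# distance to skull]; labels: 0 epidural, 1 subdural, 2 intracerebral.
rng = np.random.default_rng(1)
X = rng.normal(size=(48, 4))
y = rng.integers(0, 3, size=48)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# Print the learned rules in a human-readable, C4.5-like form.
print(export_text(tree, feature_names=[
    'centroid_depth', 'area', 'elongation', 'skull_distance']))
```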
Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza
2015-01-01
This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and at different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues of the Hessian matrix. The final vessel segmentation is obtained with a simple iterative region growing algorithm, which merges the centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting the blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image using multi-structure-element morphology and modification of the FDCT coefficients. Then, the Canny edge detector and the Hough transform are applied to the reconstructed image to extract the boundaries of the candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance-regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
Test experience on an ultrareliable computer communication network
NASA Technical Reports Server (NTRS)
Abbott, L. W.
1984-01-01
The dispersed sensor processing mesh (DSPM) is an experimental, ultrareliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imparted to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulations of larger DSPM-type networks are used to examine the inherent limitation on growth time imposed by the growth algorithm and the relationship of growth time to network size and topology.
Brain tumor segmentation in MR slices using improved GrowCut algorithm
NASA Astrophysics Data System (ADS)
Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying
2015-12-01
The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by applying a bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiencies of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. The result of the proposed method is compared qualitatively and quantitatively with the actual position of the simulated 3D tumor. In addition, our automatic method matches the performance of manual segmentation and of the interactive GrowCut with manual interference, while providing fully automatic segmentation.
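The underlying GrowCut automaton is easy to state: each labeled cell tries to conquer its neighbours with a strength attenuated by intensity difference. Below is a minimal two-label sketch; the seeds are given here, whereas the paper derives them automatically from the bounding box and symmetry analysis, and image edges wrap via np.roll purely for brevity:

```python
import numpy as np

def growcut(image, seeds, iterations=200):
    """Minimal GrowCut cellular automaton on a grayscale image.

    seeds: int array, 0 = unlabeled, 1 = tumor, 2 = background.
    """
    img = image.astype(float)
    max_diff = np.ptp(img) or 1.0
    label = seeds.copy()
    strength = (seeds > 0).astype(float)
    for _ in range(iterations):
        changed = False
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            # Shifted copies give each cell its neighbour in one direction.
            nl = np.roll(label, (dy, dx), axis=(0, 1))
            ns = np.roll(strength, (dy, dx), axis=(0, 1))
            ni = np.roll(img, (dy, dx), axis=(0, 1))
            g = 1.0 - np.abs(img - ni) / max_diff   # attack attenuation
            attack = g * ns
            win = attack > strength                 # neighbour conquers
            if win.any():
                label[win] = nl[win]
                strength[win] = attack[win]
                changed = True
        if not changed:
            break
    return label
```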
NASA Astrophysics Data System (ADS)
Teodoro, Ana C.; Araujo, Ricardo
2016-01-01
The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can create software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potential of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, and 10 different regions of interest were selected. Two different approaches were followed: the first (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier; the second (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by a support vector machine (SVM) classifier. Four classes were considered using Spring, and seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and requires a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor limiting the performance of the classifiers.
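As a rough illustration of the second pipeline, the scikit-learn sketch below runs mean shift on per-pixel color features and then trains an SVM on per-segment mean colors. The tile, the class labels, and the feature choice are placeholder assumptions, not the study's orthomosaic or training scheme.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.svm import SVC

rgb = np.random.rand(64, 64, 3)        # stand-in for an RGB orthomosaic tile
X = rgb.reshape(-1, 3)

# Mean shift on color features; clusters act as coarse image segments.
bw = estimate_bandwidth(X, quantile=0.1, n_samples=500)
segments = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(X)

# Per-segment mean color as object features, then an SVM over segments.
feats = np.array([X[segments == s].mean(axis=0) for s in np.unique(segments)])
y = np.arange(len(feats)) % 7          # hypothetical labels for 7 classes
clf = SVC(kernel='rbf').fit(feats, y)
```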
NASA Technical Reports Server (NTRS)
Taconet, O.; Benallegue, M.; Vidal, A.; Vidal-Madjar, D.; Prevot, L.; Normand, M.
1993-01-01
The ability of remote sensing to monitor vegetation density and soil moisture for agricultural applications is extensively studied. In optical bands, vegetation indices (NDVI, WDVI) derived from visible and near-infrared reflectances are related to biophysical quantities such as the leaf area index and biomass. In active microwave bands, the quantitative assessment of crop parameters and soil moisture over agricultural areas by radar multiconfiguration algorithms remains prospective. Furthermore, the main results are mostly validated on small test sites and have yet to be demonstrated in an operational way at a regional scale. In this study, a large data set of radar backscattering was acquired at a regional scale over a French pilot watershed, the Orgeval, during two growing seasons in 1988 and 1989 (mainly wheat and corn). The radar backscattering was provided by the airborne scatterometer ERASME, designed at CRPE (C and X bands, HH and VV polarizations). Empirical relationships to estimate crop water and soil moisture over wheat in the C-band HH configuration under actual field conditions and at a watershed scale are investigated. The algorithms developed in this configuration are then applied to map the surface conditions over wheat fields using the AIRSAR and TMS images collected during the MAC EUROPE 1991 experiment. The synergy between optical and microwave bands is analyzed.
Individual Tree Crown Delineation Using Multi-Wavelength Titan LIDAR Data
NASA Astrophysics Data System (ADS)
Naveed, F.; Hu, B.
2017-10-01
The inability to detect the Emerald Ash Borer (EAB) at an early stage has led to considerable loss of different species of ash trees. Due to the increasing risk posed by the EAB, a robust and accurate method is needed for identifying Individual Tree Crowns (ITCs) that are at risk of being infected or are already diseased. This paper outlines an ITC delineation method that employs airborne multi-spectral Light Detection and Ranging (LiDAR) data to accurately delineate tree crowns. The raw LiDAR data were initially pre-processed to generate Digital Surface Models (DSMs) and Digital Elevation Models (DEMs) using an iterative progressive TIN (Triangulated Irregular Network) densification method. The DSM and DEM were then used for Canopy Height Model (CHM) generation, from which structural information pertaining to the size and shape of the tree crowns was obtained. The structural information, along with the spectral information, was used to segment ITCs using a region growing algorithm. The availability of multi-spectral LiDAR data allows for delineation of crowns that have otherwise homogeneous structural characteristics and hence cannot be isolated from the CHM alone. This study exploits the spectral data to derive initial approximations of individual tree tops and consequently grow those regions based on the spectral constraints of the individual trees.
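The seed-and-grow step can be illustrated on a synthetic CHM: local maxima approximate tree tops, and a watershed on the inverted CHM stands in for the paper's region growing (the spectral constraints from the multi-wavelength channels are omitted). All thresholds are illustrative.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic CHM with two overlapping crowns (heights in metres).
yy, xx = np.mgrid[0:80, 0:80]
chm = (12 * np.exp(-((yy - 30) ** 2 + (xx - 30) ** 2) / 60.0)
       + 10 * np.exp(-((yy - 50) ** 2 + (xx - 55) ** 2) / 40.0))

# Tree tops: local CHM maxima at least 3 m tall and a few pixels apart.
tops = peak_local_max(chm, min_distance=4, threshold_abs=3.0)
markers = np.zeros(chm.shape, dtype=int)
markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

# Grow each crown downhill from its top; watershed on the inverted CHM
# stands in for the region growing step.
crowns = watershed(-chm, markers, mask=chm > 2.0)
print(len(tops), "crowns delineated")
```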
Validation of neural spike sorting algorithms without ground-truth information.
Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F
2016-05-01
The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
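A sketch of the stability idea: given the unit labels that two runs of a sorter assign to the same spikes, optimally pair the units and report per-unit agreement. This assumes spike-level correspondence between runs and is only an illustration of the metric family described, not the released implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def unit_agreement(labels_a, labels_b):
    """Per-unit agreement between two sorter runs on the same spikes.

    labels_a, labels_b: per-spike unit labels from two runs (e.g. on
    jittered or resampled data). Units are optimally paired first."""
    units_a, units_b = np.unique(labels_a), np.unique(labels_b)
    conf = np.array([[np.sum((labels_a == a) & (labels_b == b))
                      for b in units_b] for a in units_a])
    row, col = linear_sum_assignment(-conf)        # maximize matched spikes
    size_a = conf.sum(axis=1)[row]
    size_b = conf.sum(axis=0)[col]
    return conf[row, col] / np.maximum(size_a, size_b)   # 1.0 = stable unit

run1 = np.array([0, 0, 1, 1, 2, 2, 2])
run2 = np.array([5, 5, 7, 7, 9, 9, 7])
print(unit_agreement(run1, run2))   # agreement drops for units mixed across runs
```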
Algorithms for in-season nutrient management in cereals
USDA-ARS?s Scientific Manuscript database
The demand for improved decision making products for cereal production systems has placed added emphasis on using plant sensors in-season, and that incorporate real-time, site specific, growing environments. The objective of this work was to describe validated in-season sensor based algorithms prese...
NASA Astrophysics Data System (ADS)
Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L.
2016-02-01
A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position-sensitive detector. Detectors that use light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction, based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be correctly identified and crystal-specific corrections, such as energy windowing or time alignment, can be applied. While manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409-crystal dual-layer offset LYSO crystal array read out by a 32-pixel SiPM array. For these detector flood images, depending on user-defined input parameters, the algorithm runtime ranged between 17.5 and 82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors localized to error-prone corner regions. This method can be easily extended to other detector types through adjustment of the initial template model.
Correlation approach to identify coding regions in DNA sequences
NASA Technical Reports Server (NTRS)
Ossadnik, S. M.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Mantegna, R. N.; Peng, C. K.; Simons, M.; Stanley, H. E.
1994-01-01
Recently, it was observed that noncoding regions of DNA sequences possess long-range power-law correlations, whereas coding regions typically display only short-range correlations. We develop an algorithm based on this finding that enables investigators to perform a statistical analysis on long DNA sequences to locate possible coding regions. The algorithm is particularly successful in predicting the location of lengthy coding regions. For example, for the complete genome of yeast chromosome III (315,344 nucleotides), at least 82% of the predictions correspond to putative coding regions; the algorithm correctly identified all coding regions larger than 3000 nucleotides, 92% of coding regions between 2000 and 3000 nucleotides long, and 79% of coding regions between 1000 and 2000 nucleotides. The predictive ability of this new algorithm supports the claim that there is a fundamental difference in the correlation property between coding and noncoding sequences. This algorithm, which is not species-dependent, can be implemented with other techniques for rapidly and accurately locating relatively long coding regions in genomic sequences.
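Detrended fluctuation analysis (DFA) is one standard way to measure the long-range power-law correlations described here; the toy sketch below computes a DFA exponent for a purine/pyrimidine "DNA walk". Applied window by window along a genome, exponents near 0.5 would flag candidate coding (short-range correlated) regions. The mapping rule and scales are illustrative, not the authors' exact algorithm.

```python
import numpy as np

def dfa_exponent(walk, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation exponent of a numeric 'DNA walk'.

    ~0.5 indicates short-range (coding-like) correlations; clearly above
    0.5 indicates long-range (noncoding-like) correlations."""
    y = np.cumsum(walk - np.mean(walk))
    F = []
    for s in scales:
        n = len(y) // s
        boxes = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # Fluctuation = RMS residual around a linear fit within each box.
        resid = boxes - np.array([np.polyval(np.polyfit(t, b, 1), t)
                                  for b in boxes])
        F.append(np.sqrt(np.mean(resid ** 2)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

seq = "ATGGCGTACGTTAGC" * 200                               # toy sequence
walk = np.array([1.0 if c in "AG" else -1.0 for c in seq])  # purine/pyrimidine
print(dfa_exponent(walk))
```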
Data-driven advice for applying machine learning to bioinformatics problems
Olson, Randal S.; La Cava, William; Mustahsan, Zairah; Varik, Akshay; Moore, Jason H.
2017-01-01
As the bioinformatics field grows, it must keep pace not only with new data but with new algorithms. Here we contribute a thorough analysis of 13 state-of-the-art, commonly used machine learning algorithms on a set of 165 publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. We present a number of statistical and visual comparisons of algorithm performance and quantify the effect of model selection and algorithm tuning for each algorithm and dataset. The analysis culminates in the recommendation of five algorithms with hyperparameters that maximize classifier performance across the tested problems, as well as general guidelines for applying machine learning to supervised classification problems. PMID:29218881
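The comparison protocol can be miniaturized with scikit-learn: cross-validate several common classifiers on one dataset and report mean accuracy. The synthetic data and the four models shown are placeholders for the paper's 13 algorithms and 165 problems.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gboost": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name:14s} {scores.mean():.3f} +/- {scores.std():.3f}")
```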
NASA Astrophysics Data System (ADS)
Zhang, J.; Ives, A. R.; Turner, M. G.; Kucharik, C. J.
2017-12-01
Previous studies have identified global agricultural regions where "stagnation" of long-term crop yield increases has occurred. These studies have used a variety of simple statistical methods that often ignore important aspects of time series regression modeling. These methods can lead to differing and contradictory results, which creates uncertainty regarding food security given rapid global population growth. Here, we present a new statistical framework incorporating time series-based algorithms into standard regression models to quantify spatiotemporal yield trends of US maize, soybean, and winter wheat from 1970-2016. Our primary goal was to quantify spatial differences in yield trends for these three crops using USDA county-level data. This information was used to identify regions experiencing the largest changes in the rate of yield increases over time, and to determine whether abrupt shifts in the rate of yield increases have occurred. Although crop yields continue to increase in most maize-, soybean-, and winter wheat-growing areas, yield increases have stagnated in some key agricultural regions during the most recent 15 to 16 years: some maize-growing areas, except for the northern Great Plains, have shown a significant trend towards smaller annual yield increases for maize; soybean has maintained consistent long-term yield gains in the northern Great Plains, the Midwest, and the southeast US, but has experienced a shift to smaller annual increases in other regions; winter wheat maintained a moderate annual increase in eastern South Dakota and eastern US locations, but showed a decline in the magnitude of annual increases across the central Great Plains and western US regions. Our results suggest that there were abrupt shifts in the rate of annual yield increases in a variety of US regions among the three crops. The framework presented here can be broadly applied to additional yield trend analyses for different crops and regions of the Earth.
Cui, Tianxiang; Wang, Yujie; Sun, Rui; Qiao, Chen; Fan, Wenjie; Jiang, Guoqing; Hao, Lvyuan; Zhang, Lei
2016-01-01
Estimating gross primary production (GPP) and net primary production (NPP) is of significant importance in studying carbon cycles. Using models driven by multi-source and multi-scale data is a promising approach to estimating GPP and NPP at regional and global scales. With a focus on data that are openly accessible, this paper presents a GPP and NPP model driven by remotely sensed data and meteorological data with spatial resolutions varying from 30 m to 0.25 degree and temporal resolutions ranging from 3 hours to 1 month, integrating remote sensing techniques and eco-physiological process theories. Our model is also designed as part of the Multi-source data Synergized Quantitative (MuSyQ) Remote Sensing Production System. In the presented MuSyQ-NPP algorithm, daily GPP for a 10-day period was calculated as a product of incident photosynthetically active radiation (PAR) and its fraction absorbed by vegetation (FPAR) using a light use efficiency (LUE) model. The autotrophic respiration (Ra) was determined using eco-physiological process theories, and the daily NPP was obtained as the balance between GPP and Ra. To test its feasibility at regional scales, our model was run over an arid and semi-arid region of the Heihe River Basin, China to generate daily GPP and NPP during the growing season of 2012. The results indicated that both GPP and NPP exhibit clear spatial and temporal patterns in their distribution over the Heihe River Basin during the growing season due to temperature, water, and solar influx conditions. After validation against ground-based measurements, the MODIS GPP product (MOD17A2H), and results reported in recent literature, we found the MuSyQ-NPP algorithm could yield an RMSE of 2.973 gC m-2 d-1 and an R of 0.842 when compared with ground-based GPP, whereas an RMSE of 8.010 gC m-2 d-1 and an R of 0.682 were achieved for MODIS GPP; the estimated NPP values were also well within the range of previous literature, supporting the reliability of our modelling results. This research suggests that the utilization of multi-source data at various scales can help establish an appropriate model for calculating GPP and NPP at regional scales with relatively high spatial and temporal resolution. PMID:27088356
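A toy numeric sketch of the LUE bookkeeping described above (GPP = ε × FPAR × PAR per day, NPP = GPP − Ra). All values, including the maximal light use efficiency and the temperature scalar, are illustrative assumptions, not the MuSyQ-NPP parameterization.

```python
import numpy as np

# Toy 10-day LUE calculation: GPP = epsilon * FPAR * PAR, NPP = GPP - Ra.
days = 10
par = np.full(days, 9.0)            # MJ m-2 d-1, incident PAR
fpar = np.full(days, 0.6)           # fraction of PAR absorbed by vegetation
tmean = np.linspace(12, 20, days)   # daily mean air temperature, deg C

eps_max = 1.8                                       # gC MJ-1, assumed max LUE
t_scalar = np.clip((tmean - 0) / (25 - 0), 0, 1)    # crude temperature scalar
gpp = eps_max * t_scalar * fpar * par               # gC m-2 d-1

ra = 0.45 * gpp                     # autotrophic respiration, crude fraction
npp = gpp - ra
print(gpp.sum(), npp.sum())         # 10-day totals
```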
A system for automatic aorta sections measurements on chest CT
NASA Astrophysics Data System (ADS)
Pfeffer, Yitzchak; Mayer, Arnaldo; Zholkover, Adi; Konen, Eli
2016-03-01
A new method is proposed for caliber measurement of the ascending aorta (AA) and descending aorta (DA). A key component of the method is the automatic detection of the carina, an anatomical landmark around which an axial volume of interest (VOI) can be defined to observe the aortic caliber. For each slice in the VOI, a linear profile line connecting the AA with the DA is found by pattern matching on the underlying intensity profile. Next, the aortic center position is found using a Hough transform on the best linear segment candidate. Finally, region growing around the center provides an accurate segmentation and caliber measurement. We evaluated the algorithm on 113 sequential chest CT scans with slice thicknesses of 0.75-3.75 mm, 90 of them with injected contrast agent. The algorithm success rate was computed as the percentage of scans in which the center of the AA was found. Automated measurements of AA caliber were compared with independent measurements by two experienced chest radiologists, comparing the absolute difference between the two radiologists with the absolute difference between the algorithm and each radiologist. Measurement stability was demonstrated by computing the STD of the absolute difference between the radiologists, and between the algorithm and the radiologists. Results: success rates of 93% and 74% were achieved for contrast-injected and non-contrast cases, respectively. These results indicate that the algorithm can be robust to the large variability of image quality found in a real-world clinical setting. The average absolute difference between the algorithm and the radiologists was 1.85 mm, lower than the average absolute difference between the radiologists (2.1 mm). The STD of the absolute difference between the algorithm and the radiologists was 1.5 mm vs. 1.6 mm between the two radiologists. These results demonstrate the clinical relevance of the algorithm's measurements.
Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis
Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.
2016-01-01
Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
Derivation of a regional active-optical reflectance sensor corn algorithm
USDA-ARS?s Scientific Manuscript database
Active-optical reflectance sensor (AORS) algorithms developed for in-season corn (Zea mays L.) N management have traditionally been derived using sub-regional scale information. However, studies have shown these previously developed AORS algorithms are not consistently accurate when used on a region...
Realizing Mitigation Efficiency of European Commercial Forests by Climate Smart Forestry.
Yousefpour, Rasoul; Augustynczik, Andrey Lessa Derci; Reyer, Christopher P O; Lasch-Born, Petra; Suckow, Felicitas; Hanewinkel, Marc
2018-01-10
European temperate and boreal forests sequester up to 12% of Europe's annual carbon emissions. Forest carbon density can be manipulated through management to maximize its climate mitigation potential, and fast-growing tree species may contribute the most to Climate Smart Forestry (CSF) compared to slow-growing hardwoods. This type of CSF takes into account not only forest resource potentials in sequestering carbon, but also the economic impact of regional forest products, and discounts both variables over time. We used the process-based forest model 4C to simulate European commercial forests' growth conditions and coupled it with an optimization algorithm to simulate the implementation of CSF for 18 European countries encompassing 68.3 million ha of forest (42.4% of total EU-28 forest area). We found a European CSF policy that could sequester 7.3-11.1 billion tons of carbon, projected to be worth 103 to 141 billion euros in the 21st century. An efficient CSF policy would allocate carbon sequestration to European countries with a lower wood price, lower labor costs, high harvest costs, or a mixture thereof to increase its economic efficiency. This policy prioritized the allocation of mitigation efforts to northern, eastern and central European countries and favored the fast-growing conifers Picea abies and Pinus sylvestris over the broadleaves Fagus sylvatica and Quercus species.
NASA Astrophysics Data System (ADS)
Dang, Nguyen Tuan; Akai-Kasada, Megumi; Asai, Tetsuya; Saito, Akira; Kuwahara, Yuji; Hokkaido University Collaboration
2015-03-01
Machine learning with artificial neural networks is regarded as one of the best ways to understand how the human brain trains itself to process information. In this study, we successfully developed programs using supervised machine learning algorithms. However, these supervised learning processes for the neural network require very powerful computing configurations. Driven by the need for greater computing ability and lower power consumption, accelerator circuits become critical. To develop such accelerator circuits using a supervised machine learning algorithm, a conducting-polymer micro/nanowire growing process was realized and applied as a synaptic weight controller. In this work, high-conductivity polypyrrole (PPy) and poly(3,4-ethylenedioxythiophene) (PEDOT) wires were grown potentiostatically, crosslinking the designated electrodes prefabricated by lithography, when a square-wave AC voltage of appropriate amplitude and frequency was applied. The micro/nanowire growing process emulated the neurotransmitter release process of synapses inside a biological neuron, and the variation in wire resistance during growth was treated as the variation of synaptic weight in the machine learning algorithm. This work was carried out in cooperation with the Graduate School of Information Science and Technology, Hokkaido University.
A region-based segmentation method for ultrasound images in HIFU therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Dong, E-mail: dongz@whu.edu.cn; Liu, Yu; Yang, Yan
Purpose: Precisely and efficiently locating a tumor with less manual intervention in ultrasound-guided high-intensity focused ultrasound (HIFU) therapy is one of the keys to guaranteeing the therapeutic result and improving the efficiency of the treatment. The segmentation of ultrasound images has always been difficult due to the influences of speckle, acoustic shadows, and signal attenuation as well as the variety of tumor appearance. The quality of HIFU guidance images is even poorer than that of conventional diagnostic ultrasound images because the ultrasonic probe used for HIFU guidance usually obtains images without making contact with the patient's body. Therefore, the segmentation becomes more difficult. To solve the segmentation problem of the ultrasound guidance image in the treatment planning procedure for HIFU therapy, a novel region-based segmentation method for uterine fibroids in HIFU guidance images is proposed. Methods: Tumor partitioning in the HIFU guidance image without manual intervention is achieved by a region-based split-and-merge framework. A new iterative multiple region growing algorithm is proposed to first split the image into homogeneous regions (superpixels). The features extracted within these homogeneous regions are more stable than those extracted within the conventional neighborhood of a pixel. The split regions are then merged by a superpixel-based adaptive spectral clustering algorithm. To ensure that superpixels belonging to the same tumor can be clustered together in the merging process, a particular construction strategy for the similarity matrix is adopted for the spectral clustering, and the similarity matrix is constructed by taking advantage of a combination of specifically selected first-order and second-order texture features computed from the gray levels and the gray level co-occurrence matrixes, respectively. The tumor region is picked out automatically from the background regions by an algorithm according to a priori information about the tumor position, shape, and size. Additionally, an appropriate cluster number for spectral clustering can be determined by the same algorithm; thus the automatic segmentation of the tumor region is achieved. Results: To evaluate the performance of the proposed method, 50 uterine fibroid ultrasound images from different patients receiving HIFU therapy were segmented, and the obtained tumor contours were compared with those delineated by an experienced radiologist. For area-based evaluation results, the mean values of the true positive ratio, the false positive ratio, and the similarity were 94.42%, 4.71%, and 90.21%, respectively, and the corresponding standard deviations were 2.54%, 3.12%, and 3.50%, respectively. For distance-based evaluation results, the mean values of the normalized Hausdorff distance and the normalized mean absolute distance were 4.93% and 0.90%, respectively, and the corresponding standard deviations were 2.22% and 0.34%, respectively. The running time of the segmentation process was 12.9 s for a 318 × 333 (pixels) image. Conclusions: Experiments show that the proposed method can segment the tumor region accurately and efficiently with less manual intervention, which opens the possibility of automatic segmentation and real-time guidance in HIFU therapy.
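A hedged sketch of the split-and-merge idea: SLIC superpixels stand in for the paper's iterative multiple region growing, first-order and GLCM texture features are computed per superpixel, and spectral clustering merges them through a Gaussian similarity matrix. The image, feature selection, and cluster count are placeholders (scikit-image ≥ 0.19 API assumed).

```python
import numpy as np
from skimage import data, img_as_ubyte
from skimage.segmentation import slic
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import SpectralClustering

img = img_as_ubyte(data.camera())                  # stand-in for a guidance image
sp = slic(img, n_segments=60, channel_axis=None)   # SLIC as a stand-in for the
                                                   # iterative region growing

feats = []
for s in np.unique(sp):
    ys, xs = np.nonzero(sp == s)
    patch = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    glcm = graycomatrix(patch, [1], [0], levels=256, symmetric=True, normed=True)
    feats.append([patch.mean(),                        # first-order feature
                  graycoprops(glcm, 'contrast')[0, 0], # second-order (GLCM)
                  graycoprops(glcm, 'homogeneity')[0, 0]])
feats = np.array(feats)

# Gaussian similarity matrix over superpixel features, then spectral merging.
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
affinity = np.exp(-d2 / (2 * d2.mean() + 1e-9))
merged = SpectralClustering(n_clusters=4, affinity='precomputed').fit_predict(affinity)
```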
NASA Astrophysics Data System (ADS)
Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe
2017-04-01
In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional, and global seismic networks for near real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for processing continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise, and versatile enough to be deployed for monitoring seismicity in very different contexts. In this study, we evaluate the ability of two machine learning algorithms, Random Forest and Deep Neural Network classifiers, to analyze seismic sources at the Piton de la Fournaise volcano. We gathered a catalog of more than 20,000 events belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content, and the polarization of the seismic waves, to parameterize the recorded signals. We show that both algorithms provide similar positive classification rates, with values exceeding 90% of the events. When trained with a sufficient number of events, the rate of positive identification can reach 99%. These very high rates of positive identification open the perspective of an operational implementation of these algorithms for near real-time monitoring of mass movements and other environmental sources at the local, regional, and even global scale.
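A minimal Random Forest baseline for this kind of event catalog; the feature table below is hypothetical, standing in for the study's 60 waveform, spectral, and polarization attributes over 8 source classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical catalog: one row per event, 60 attributes and one of 8 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 60))
y = rng.integers(0, 8, size=20000)

clf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # rate of correct classification
```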
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasquier, David; Lacornerie, Thomas; Vermandel, Maximilien
Purpose: Target-volume and organ-at-risk delineation is a time-consuming task in radiotherapy planning. The development of automated segmentation tools remains problematic, because of pelvic organ shape variability. We evaluate a three-dimensional (3D), deformable-model approach and a seeded region-growing algorithm for automatic delineation of the prostate and organs-at-risk on magnetic resonance images. Methods and Materials: Manual and automatic delineation were compared in 24 patients using a sagittal T2-weighted (T2-w) turbo spin echo (TSE) sequence and an axial T1-weighted (T1-w) 3D fast-field echo (FFE) or TSE sequence. For automatic prostate delineation, an organ model-based method was used. Prostates without seminal vesicles were delineated as the clinical target volume (CTV). For automatic bladder and rectum delineation, a seeded region-growing method was used. Manual contouring was considered the reference method. The following parameters were measured: volume ratio (Vr) (automatic/manual), volume overlap (Vo) (ratio of the volume of intersection to the volume of union; optimal value = 1), and correctly delineated volume (Vc) (percent ratio of the volume of intersection to the manually defined volume; optimal value = 100). Results: For the CTV, the Vr, Vo, and Vc were 1.13 (±0.1 SD), 0.78 (±0.05 SD), and 94.75 (±3.3 SD), respectively. For the rectum, the Vr, Vo, and Vc were 0.97 (±0.1 SD), 0.78 (±0.06 SD), and 86.52 (±5 SD), respectively. For the bladder, the Vr, Vo, and Vc were 0.95 (±0.03 SD), 0.88 (±0.03 SD), and 91.29 (±3.1 SD), respectively. Conclusions: Our results show that the organ-model method is robust, and results in reproducible prostate segmentation with minor interactive corrections. For automatic bladder and rectum delineation, magnetic resonance imaging soft-tissue contrast enables the use of region-growing methods.
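The three evaluation metrics are straightforward to compute from binary masks; the sketch below follows the definitions given in the abstract.

```python
import numpy as np

def delineation_scores(auto, manual):
    """Vr, Vo, Vc as defined in the abstract, from boolean 3-D masks."""
    auto, manual = auto.astype(bool), manual.astype(bool)
    inter = np.logical_and(auto, manual).sum()
    union = np.logical_or(auto, manual).sum()
    vr = auto.sum() / manual.sum()      # volume ratio (automatic/manual)
    vo = inter / union                  # volume overlap, optimal = 1
    vc = 100.0 * inter / manual.sum()   # correctly delineated volume, optimal = 100
    return vr, vo, vc
```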
The Quantitative Analysis of User Behavior Online - Data, Models and Algorithms
NASA Astrophysics Data System (ADS)
Raghavan, Prabhakar
By blending principles from mechanism design, algorithms, machine learning and massive distributed computing, the search industry has become good at optimizing monetization on sound scientific principles. This represents a successful and growing partnership between computer science and microeconomics. When it comes to understanding how online users respond to the content and experiences presented to them, we have more of a lacuna in the collaboration between computer science and certain social sciences. We will use a concrete technical example from image search results presentation, developing in the process some algorithmic and machine learning problems of interest in their own right. We then use this example to motivate the kinds of studies that need to grow between computer science and the social sciences; a critical element of this is the need to blend large-scale data analysis with smaller-scale eye-tracking and "individualized" lab studies.
IMM tracking of a theater ballistic missile during boost phase
NASA Astrophysics Data System (ADS)
Hutchins, Robert G.; San Jose, Anthony
1998-09-01
Since the SCUD launches in the Gulf War, theater ballistic missile (TBM) systems have become a growing concern for the US military. Detection, tracking and engagement during boost phase or shortly after booster cutoff are goals that grow in importance with the proliferation of weapons of mass destruction. This paper addresses the performance of tracking algorithms for TBMs during boost phase and across the transition to ballistic flight. Three families of tracking algorithms are examined: alpha-beta-gamma trackers, Kalman-based trackers, and the interactive multiple model (IMM) tracker. In addition, a variation on the IMM to include prior knowledge of a booster cutoff parameter is examined. Simulated data is used to compare algorithms. Also, the IMM tracker is run on an actual ballistic missile trajectory. Results indicate that IMM trackers show significant advantage in tracking through the model transition represented by booster cutoff.
NASA Astrophysics Data System (ADS)
Kodera, Yuki
2018-01-01
Large earthquakes with long rupture durations emit P wave energy throughout the rupture period. Incorporating late-onset P waves into earthquake early warning (EEW) algorithms could contribute to robust predictions of strong ground motion. Here I describe a technique to detect in real time P waves from growing ruptures to improve the timeliness of an EEW algorithm based on seismic wavefield estimation. The proposed P wave detector, which employs a simple polarization analysis, successfully detected P waves from strong motion generation areas of the 2011 Mw 9.0 Tohoku-oki earthquake rupture. An analysis using 23 large (M ≥ 7) events from Japan confirmed that seismic intensity predictions based on the P wave detector significantly increased lead times without appreciably decreasing the prediction accuracy. P waves from growing ruptures, being one of the fastest carriers of information on ongoing rupture development, have the potential to improve the performance of EEW systems.
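The "simple polarization analysis" can be sketched as a sliding-window covariance eigen-decomposition of three-component data; the rectilinearity measure below is one common formulation and only a stand-in for the operational detector (window length and the 0-1 scaling are assumptions).

```python
import numpy as np

def rectilinearity(z, n, e, win=100):
    """Sliding-window polarization analysis of 3-component seismograms.

    Returns a 0-1 trace; values near 1 with steep incidence are
    characteristic of P-wave arrivals."""
    out = np.zeros(len(z))
    for i in range(win, len(z)):
        seg = np.vstack([z[i - win:i], n[i - win:i], e[i - win:i]])
        w = np.sort(np.linalg.eigvalsh(np.cov(seg)))  # ascending eigenvalues
        out[i] = 1.0 - (w[0] + w[1]) / (2 * w[2] + 1e-12)
    return out
```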
Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dexin; Yang, Liuqing; Florita, Anthony
The deregulation of the power system and the incorporation of generation from renewable energy sources necessitate faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral clustering based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
The Roles of Suprasegmental Features in Predicting English Oral Proficiency with an Automated System
ERIC Educational Resources Information Center
Kang, Okim; Johnson, David
2018-01-01
Suprasegmental features have received growing attention in the field of oral assessment. In this article we describe a set of computer algorithms that automatically scores the oral proficiency of non-native speakers using unconstrained English speech. The algorithms employ machine learning and 11 suprasegmental measures divided into four groups…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright
Finding and identifying cryptography is a growing concern in the malware analysis community. In this paper, a heuristic method for determining the likelihood that a given function contains a cryptographic algorithm is discussed, and the results of applying this method in various environments are shown. The algorithm is based on frequency analysis of the opcodes that make up each function within a binary.
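A toy version of such a heuristic: score each disassembled function by the share of bit-manipulation opcodes, which dominate many cryptographic primitives. The opcode set and the scoring rule are illustrative, not the paper's parameters.

```python
from collections import Counter

# Opcodes common in crypto inner loops (illustrative set, not the paper's).
CRYPTO_HINT_OPS = {"xor", "rol", "ror", "shl", "shr", "and", "or", "not"}

def crypto_likelihood(opcodes):
    """opcodes: list of mnemonic strings disassembled from one function.
    Returns the fraction of bit-twiddling instructions as a crude score."""
    counts = Counter(op.lower() for op in opcodes)
    total = sum(counts.values()) or 1
    hint = sum(counts[op] for op in CRYPTO_HINT_OPS)
    return hint / total

print(crypto_likelihood(["mov", "xor", "rol", "xor", "add", "shr", "xor"]))
```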
Dust Storm Feature Identification and Tracking from 4D Simulation Data
NASA Astrophysics Data System (ADS)
Yu, M.; Yang, C. P.
2016-12-01
Dust storms cause significant damage to health, property, and the environment worldwide every year. To help mitigate the damage, dust forecasting models simulate and predict upcoming dust events, providing valuable information to scientists, decision makers, and the public. Normally, the model simulations are conducted in four dimensions (latitude, longitude, elevation, and time) and represent the three-dimensional (3D), spatially heterogeneous features of a storm and its evolution over space and time. This research proposes an automatic multi-threshold, region-growing based identification algorithm to identify critical dust storm features and track the evolution of dust storm events through space and time. In addition, a spatiotemporal data model is proposed, which can support the characterization and representation of dust storm events and their dynamic patterns. Quantitative and qualitative evaluations of the algorithm are conducted to test its sensitivity and its capability to identify and track dust storm events. This study has the potential to support better early warning for decision-makers and the public, thus making hazard mitigation plans more effective.
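A compact sketch of multi-threshold region identification and overlap-based tracking on a 3D concentration field, using SciPy connected-component labeling; the thresholds and toy data are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def dust_regions(conc, thresholds=(50.0, 200.0, 500.0)):
    """Label connected regions of a 3-D dust concentration field at several
    thresholds (simplified multi-threshold region identification)."""
    return {t: ndi.label(conc >= t)[0] for t in thresholds}

def track_overlap(labels_t0, labels_t1, region_id):
    """Follow one region to the next timestep by maximum voxel overlap."""
    hits = labels_t1[labels_t0 == region_id]
    hits = hits[hits > 0]
    return np.bincount(hits).argmax() if hits.size else None

conc0 = np.random.rand(20, 40, 40) * 600   # toy concentration snapshots
conc1 = np.roll(conc0, 2, axis=2)          # the 'storm' drifts eastward
r0, r1 = dust_regions(conc0)[200.0], dust_regions(conc1)[200.0]
print(track_overlap(r0, r1, region_id=1))
```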
Strongly enhanced thermal transport in a lightly doped Mott insulator at low temperature.
Zlatić, V; Freericks, J K
2012-12-28
We show how a lightly doped Mott insulator has hugely enhanced electronic thermal transport at low temperature. It displays universal behavior independent of the interaction strength when the carriers can be treated as nondegenerate fermions and a nonuniversal "crossover" region where the Lorenz number grows to large values, while still maintaining a large thermoelectric figure of merit. The electron dynamics are described by the Falicov-Kimball model which is solved for arbitrary large on-site correlation with a dynamical mean-field theory algorithm on a Bethe lattice. We show how these results are generic for lightly doped Mott insulators as long as the renormalized Fermi liquid scale is pushed to very low temperature and the system is not magnetically ordered.
Optimized programming algorithm for cylindrical and directional deep brain stimulation electrodes.
Anderson, Daria Nesterovich; Osting, Braxton; Vorwerk, Johannes; Dorval, Alan D; Butson, Christopher R
2018-04-01
Deep brain stimulation (DBS) is a growing treatment option for movement and psychiatric disorders. As DBS technology moves toward directional leads with increased numbers of smaller electrode contacts, trial-and-error methods of manual DBS programming are becoming too time-consuming for clinical feasibility. We propose an algorithm to automate DBS programming in near real time for a wide range of DBS lead designs. Magnetic resonance imaging and diffusion tensor imaging are used to build finite element models that include anisotropic conductivity. The algorithm maximizes activation of target tissue and utilizes the Hessian matrix of the electric potential to approximate activation of neurons in all directions. We demonstrate our algorithm's ability in an example programming case that targets the subthalamic nucleus (STN) for the treatment of Parkinson's disease for three lead designs: the Medtronic 3389 (four cylindrical contacts), the direct STNAcute (two cylindrical contacts, six directional contacts), and the Medtronic-Sapiens lead (40 directional contacts). The optimization algorithm returns patient-specific contact configurations in near real time (less than 10 s for even the most complex leads). When the lead was placed centrally in the target STN, the directional leads were able to activate over 50% of the region, whereas the Medtronic 3389 could activate only 40%. When the lead was placed 2 mm lateral to the target, the directional leads performed as well as they did in the central position, but the Medtronic 3389 activated only 2.9% of the STN. This DBS programming algorithm can be applied to cylindrical electrodes as well as novel directional leads that are too complex with modern technology to be manually programmed. The algorithm may reduce clinical programming time and encourage the use of directional leads, since they activate a larger volume of the target area than cylindrical electrodes in central and off-target lead placements.
NASA Astrophysics Data System (ADS)
Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin
2018-02-01
Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images; 151 were malignant and 151 were benign. The study consists of the following three image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to spatial and frequency characteristics of the mass regions was computed. Third, a generalized linear regression model (GLM) based machine learning classifier, combined with a bat optimization algorithm, was used to optimally fuse the selected image features based on a predefined assessment performance index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset yielded an AUC of 0.75±0.04, significantly higher than using a single best feature (AUC = 0.69±0.05) or the classifier with equally weighted features (AUC = 0.73±0.05). This study demonstrated that, compared to the conventional equal-weighted approach, an unequal-weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
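The unequal-weighted fusion step can be sketched as a search over feature weights that maximizes the AUC of a logistic-regression (GLM) classifier. Plain random search stands in here for the bat optimization algorithm, and the synthetic data only mirrors the dataset's dimensions (302 cases, 70 features).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=302, n_features=70, n_informative=10,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Random search over feature weights (stand-in for bat optimization).
rng = np.random.default_rng(0)
best_auc, best_w = 0.0, np.ones(X.shape[1])
for _ in range(200):
    w = rng.uniform(0, 1, X.shape[1])
    clf = LogisticRegression(max_iter=2000).fit(Xtr * w, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte * w)[:, 1])
    if auc > best_auc:
        best_auc, best_w = auc, w
print(best_auc)
```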
Mallette, Jennifer R; Casale, John F; Jordan, James; Morello, David R; Beyer, Paul M
2016-03-23
Previously, geo-sourcing to five major coca growing regions within South America was accomplished. However, the expansion of coca cultivation throughout South America made sub-regional origin determinations increasingly difficult. The former methodology was recently enhanced with additional stable isotope analyses (²H and ¹⁸O) to fully characterize cocaine due to the varying environmental conditions in which the coca was grown. An improved data analysis method was implemented with the combination of machine learning and multivariate statistical analysis methods to provide further partitioning between growing regions. Here, we show how the combination of trace cocaine alkaloids, stable isotopes, and multivariate statistical analyses can be used to classify illicit cocaine as originating from one of 19 growing regions within South America. The data obtained through this approach can be used to describe current coca cultivation and production trends, highlight trafficking routes, as well as identify new coca growing regions.
NASA Astrophysics Data System (ADS)
Sun, Min; Chen, Xinjian; Zhang, Zhiqiang; Ma, Chiyuan
2017-02-01
Accurate volume measurements of pituitary adenomas are important to the diagnosis and treatment of this kind of sellar tumor. Pituitary adenomas have different pathological representations and various shapes. In particular, when infiltrating surrounding soft tissues, they present similar intensities and indistinct boundaries in T1-weighted (T1W) magnetic resonance (MR) images, so the extraction of pituitary adenomas from MR images is still a challenging task. In this paper, we propose an interactive method to segment the pituitary adenoma from brain MR data by combining a graph cuts based active contour model (GCACM) and the random walk algorithm. Using the GCACM, the segmentation task is formulated as an energy minimization problem for a hybrid active contour model (ACM), and the problem is then solved by the graph cuts method. The region-based term in the hybrid ACM describes local image intensities by Gaussian distributions with different means and variances, expressed as a maximum a posteriori probability (MAP). Random walk is utilized as an initialization tool to provide an initial surface for the GCACM. The proposed method is evaluated on three-dimensional (3-D) T1W MR data of 23 patients and compared with the standard graph cuts method, the random walk method, the hybrid ACM method, a GCACM variant that considers global mean intensity in the region forces, and a competitive region-growing based GrowCut method implemented in 3D Slicer. Based on the experimental results, the proposed method is superior to those methods.
An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing
NASA Astrophysics Data System (ADS)
Zhao, Yunji; Pei, Hailong
In the vision-based autonomous landing system of a UAV, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm is proposed to resolve the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the region of the target using Camshift; second, detect the feature points within the above-acquired region using the SURF algorithm; third, match the template target against the target region in each frame. Experimental results and theoretical analysis testify to the efficiency of the algorithm.
Improved Bat Algorithm Applied to Multilevel Image Thresholding
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We also improved the standard bat algorithm with modifications that add elements of differential evolution and of the artificial bee colony algorithm. Our proposed improved bat algorithm proved better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
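A much-simplified bat algorithm applied to the multilevel thresholding objective (Otsu-style between-class variance on a gray-level histogram). The loudness and pulse-rate machinery of the full bat algorithm, and the paper's DE/ABC hybridization, are omitted; all parameters are illustrative.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style objective for a gray-level histogram and k thresholds."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    edges = [0, *sorted(int(t) for t in thresholds), len(hist)]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def bat_thresholds(hist, k=3, n_bats=20, iters=100, seed=0):
    """Bare-bones bat algorithm: frequency-tuned velocities pulling bats
    toward the global best; loudness/pulse-rate updates omitted."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(1, len(hist) - 2, (n_bats, k))
    vel = np.zeros_like(pos)
    fit = np.array([between_class_variance(hist, p) for p in pos])
    best, best_fit = pos[fit.argmax()].copy(), fit.max()
    for _ in range(iters):
        freq = rng.uniform(0, 2, (n_bats, 1))
        vel += (pos - best) * freq
        pos = np.clip(pos + vel, 1, len(hist) - 2)
        fit = np.array([between_class_variance(hist, p) for p in pos])
        if fit.max() > best_fit:
            best, best_fit = pos[fit.argmax()].copy(), fit.max()
    return np.sort(best.astype(int))

hist = np.bincount(np.random.randint(0, 256, 10000), minlength=256)
print(bat_thresholds(hist, k=3))
```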
Narouze, Samer N; Provenzano, David; Peng, Philip; Eichenberger, Urs; Lee, Sang Chul; Nicholls, Barry; Moriggl, Bernhard
2012-01-01
The use of ultrasound in pain medicine for interventional axial, nonaxial, and musculoskeletal pain procedures is rapidly evolving and growing. Because of the lack of specialty-specific guidelines for ultrasonography in pain medicine, an international collaborative effort consisting of members of the Special Interest Group on Ultrasonography in Pain Medicine from the American Society of Regional Anesthesia and Pain Medicine, the European Society of Regional Anaesthesia and Pain Therapy, and the Asian Australasian Federation of Pain Societies developed the following recommendations for education and training in ultrasound-guided interventional pain procedures. The purpose of these recommendations is to define the required skills for performing ultrasound-guided pain procedures, the processes for appropriate education, and training and quality improvement. Training algorithms are outlined for practice- and fellowship-based pathways. The previously published American Society of Regional Anesthesia and Pain Medicine and European Society of Regional Anaesthesia and Pain Therapy education and teaching recommendations for ultrasound-guided regional anesthesia served as a foundation for the pain medicine recommendations. Although the decision to grant ultrasound privileges occurs at the institutional level, the committee recommends that the training guidelines outlined in this document serve as the foundation for educational training and the advancement of the practice of ultrasonography in pain medicine.
An extensive assessment of network alignment algorithms for comparison of brain connectomes.
Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario
2017-06-06
Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in neurosciences. The modeling and analysis of connectomes are therefore a growing area. Here we focus on the representation of connectomes using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network - this process is referred to as parcellation. Atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has recently been proposed to perform atlas-free random brain parcellation into nodes and align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling the brain using graph theory and subsequently comparing the structure of graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state-of-the-art network aligners on diffusion MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures. We also evaluated the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes. The analysis shows that MAGNA++ is the best global alignment algorithm. The paper presents a new analysis methodology that uses network alignment for validating atlas-free parcellation brain connectomes. The methodology has been tested on several brain datasets.
Hyperspectral Remote Sensing of Terrestrial Ecosystem Productivity from ISS
NASA Astrophysics Data System (ADS)
Huemmrich, K. F.; Campbell, P. K. E.; Gao, B. C.; Flanagan, L. B.; Goulden, M.
2017-12-01
Data from the Hyperspectral Imager for the Coastal Ocean (HICO), mounted on the International Space Station (ISS), were used to develop and test algorithms for remotely retrieving ecosystem productivity. The ISS orbit introduces both limitations and opportunities for observing ecosystem dynamics. Twenty-six HICO images were used from four study sites representing different vegetation types: grasslands, shrubland, and forest. Gross ecosystem production (GEP) data from eddy covariance were matched with HICO-derived spectra. Multiple algorithms successfully related spectral reflectance to GEP, including: spectral vegetation indices (SVI), SVI in a light use efficiency model framework, spectral shape characteristics through spectral derivatives and absorption feature analysis, and statistical models leading to Multiband Hyperspectral Indices (MHI) from stepwise regressions and Partial Least Squares Regression (PLSR). The algorithms achieved r2 better than 0.7 for both GEP at the overpass time and daily GEP. These algorithms were successful on a diverse set of observations combining data from multiple years, multiple times during the growing season, different times of day, different view angles, and different vegetation types. The demonstrated robustness of the algorithms over these conditions provides some confidence in mapping spatial patterns of GEP, describing variability within fields as well as regional patterns, based only on spectral reflectance information. The ISS orbit provides periods with multiple observations collected at different times of the day within a period of a few days. Diurnal GEP patterns were estimated by comparing half-hourly average GEP from the flux towers against HICO-derived GEP estimates (r2 = 0.87) when morning, midday, and afternoon observations were available within the averaging period.
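A minimal sketch of the simplest algorithm family named above, an ordinary least-squares fit of tower GEP against a spectral vegetation index; the NDVI and GEP values below are synthetic placeholders, not HICO or flux-tower data.

```python
import numpy as np

# Synthetic site observations standing in for HICO-derived NDVI and
# eddy-covariance GEP (gC m-2 d-1).
ndvi = np.array([0.35, 0.48, 0.62, 0.70, 0.55, 0.41])
gep = np.array([3.1, 5.0, 7.9, 9.2, 6.4, 4.0])

# Ordinary least-squares fit and coefficient of determination.
slope, intercept = np.polyfit(ndvi, gep, 1)
pred = slope * ndvi + intercept
r2 = 1 - np.sum((gep - pred) ** 2) / np.sum((gep - gep.mean()) ** 2)
print(f'GEP = {slope:.1f} * NDVI + {intercept:.1f}, r2 = {r2:.2f}')
```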
Supervisory Power Management Control Algorithms for Hybrid Electric Vehicles. A Survey
Malikopoulos, Andreas
2014-03-31
The growing necessity for environmentally benign hybrid propulsion systems has led to the development of advanced power management control algorithms to maximize fuel economy and minimize pollutant emissions. This paper surveys the control algorithms for hybrid electric vehicles (HEVs) and plug-in HEVs (PHEVs) that have been reported in the literature to date. The exposition covers parallel, series, and power-split HEVs and PHEVs, and includes a classification of the algorithms in terms of their implementation and the chronological order of their appearance. Remaining challenges and potential future research directions are also discussed.
An efficient parallel algorithm for the solution of a tridiagonal linear system of equations
NASA Technical Reports Server (NTRS)
Stone, H. S.
1971-01-01
Tridiagonal linear systems of equations are solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computations on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log2 N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
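A minimal sketch of the recursive-doubling idea for a first-order linear recurrence (not Stone's full tridiagonal solver): each pass composes affine prefix maps at doubling strides, so a parallel machine needs only about log2 N steps; here the parallel steps are simulated with vectorized updates.

```python
import numpy as np

def recurrence_recursive_doubling(a, b):
    """Solve x[i] = a[i]*x[i-1] + b[i] with x[-1] = 0.

    Each while-loop pass composes every prefix map with the map `step`
    positions earlier, doubling the stride, so only ~log2(N) passes are
    needed; on a parallel machine each pass is one concurrent update.
    """
    a = np.asarray(a, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(a)
    step = 1
    while step < n:
        # Maps at positions < step are already complete; pad with the
        # identity map (a=1, b=0) so the update stays fully vectorized.
        a_prev = np.concatenate([np.ones(step), a[:-step]])
        b_prev = np.concatenate([np.zeros(step), b[:-step]])
        a, b = a * a_prev, a * b_prev + b
        step *= 2
    return b  # b[i] now holds x[i]

print(recurrence_recursive_doubling([2.0, 3.0, 0.5], [1.0, 1.0, 1.0]))
```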
A traveling-salesman-based approach to aircraft scheduling in the terminal area
NASA Technical Reports Server (NTRS)
Luenberger, Robert A.
1988-01-01
An efficient algorithm is presented, based on the well-known algorithm for the traveling salesman problem, for scheduling aircraft arrivals into major terminal areas. The algorithm permits, but strictly limits, reassigning an aircraft from its initial position in the landing order. This limitation is needed so that no aircraft or aircraft category is unduly penalized. Results indicate, for the mix of arrivals investigated, a potential increase in capacity in the 3 to 5 percent range. Furthermore, it is shown that the computation time for the algorithm grows only linearly with problem size.
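The constrained-position-shifting idea can be sketched as follows; the weight classes, separation times, and brute-force search are illustrative assumptions for small instances, not the paper's algorithm or data.

```python
from itertools import permutations

# Illustrative separation times (seconds) between leader/follower
# weight classes (H = heavy, S = small); values are placeholders.
SEP = {('H', 'H'): 96, ('H', 'S'): 157, ('S', 'H'): 60, ('S', 'S'): 60}

def makespan(order, classes):
    """Total inter-arrival separation along a landing order."""
    return sum(SEP[(classes[lead], classes[follow])]
               for lead, follow in zip(order, order[1:]))

def best_cps_order(classes, max_shift=2):
    """Best landing order where no aircraft moves more than
    `max_shift` positions from its first-come-first-served slot."""
    n = len(classes)
    best = None
    for perm in permutations(range(n)):
        if all(abs(pos - ac) <= max_shift for pos, ac in enumerate(perm)):
            cost = makespan(perm, classes)
            if best is None or cost < best[0]:
                best = (cost, perm)
    return best

print(best_cps_order(['S', 'H', 'S', 'H', 'S']))
```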
Beyond the "c" and the "x": Learning with Algorithms in Massive Open Online Courses (MOOCs)
ERIC Educational Resources Information Center
Knox, Jeremy
2018-01-01
This article examines how algorithms are shaping student learning in massive open online courses (MOOCs). Following the dramatic rise of MOOC platform organisations in 2012, over 4,500 MOOCs have been offered to date, in increasingly diverse languages, and with a growing requirement for fees. However, discussions of "learning" in MOOCs…
NASA Astrophysics Data System (ADS)
Chen, Xiaoqiu; Tian, Youhua; Xu, Lin
2015-10-01
Using leaf unfolding and leaf coloration data of a widely distributed herbaceous species, Taraxacum mongolicum, we detected linear trends and temperature responses of the growing season at 52 stations from 1990 to 2009. Across the research region, the mean growing season beginning date advanced marginally significantly at a rate of -2.1 days per decade, while the mean growing season end date was significantly delayed at a rate of 3.1 days per decade. The mean growing season length was significantly prolonged at a rate of 5.1 days per decade. Over the 52 stations, linear trends of the beginning date correlate negatively with linear trends of spring temperature, whereas linear trends of the end date and length correlate positively with linear trends of autumn temperature and annual mean temperature. Moreover, the growing season linear trends are also closely related to the growing season responses to temperature and to geographic coordinates plus elevation. Regarding growing season responses to temperature, a 1 °C increase in regional mean spring temperature results in an advancement of 2.1 days in the regional mean growing season beginning date, and a 1 °C increase in regional mean autumn temperature causes a delay of 2.3 days in the regional mean growing season end date. A 1 °C increase in regional annual mean temperature induces an extension of 8.7 days in regional mean growing season length. Over the 52 stations, the response of the beginning date to spring temperature depends mainly on local annual mean temperature and geographic coordinates plus elevation. Namely, a 1 °C increase in spring temperature induces a larger advancement of the beginning date at warmer locations with lower latitudes and further west longitudes than at colder locations with higher latitudes and further east longitudes, and a larger advancement at higher than at lower elevations.
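The reported trends can be reproduced in form (not in data) by an ordinary least-squares slope expressed in days per decade; the day-of-year series below is synthetic, not the study's station records.

```python
import numpy as np

# Synthetic beginning-of-season dates (day of year) over 1990-2009,
# built with a small negative trend plus noise as a stand-in for
# observed leaf-unfolding dates.
years = np.arange(1990, 2010)
begin_doy = 120 - 0.21 * (years - 1990) + np.random.normal(0, 2, years.size)

# Least-squares slope in days/year, reported as days per decade.
slope_per_year = np.polyfit(years, begin_doy, 1)[0]
print(f'trend: {10 * slope_per_year:+.1f} days per decade')
```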
NASA Astrophysics Data System (ADS)
You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-01-01
Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. We previously developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements involving the use of Active Shape Models (ASM) for pointer recognition and a region growing-based method for pointer segmentation. These methods improved the recall of pointer recognition but contributed little to precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluations performed on two datasets, with comparisons against other pointer segmentation methods, show significantly improved precision and the highest F1 score.
Airport Flight Departure Delay Model on Improved BN Structure Learning
NASA Astrophysics Data System (ADS)
Cao, Weidong; Fang, Xiangnong
A high-score-prior genetic simulated annealing Bayesian network structure learning algorithm (HSPGSA), combining a genetic algorithm (GA) with a simulated annealing algorithm (SAA), is developed. The new algorithm provides both the strong global search capability of the GA and the strong local hill-climbing capability of the SAA. The structure with the highest score is selected preferentially, but lower-scoring structures may still be chosen, which efficiently avoids the premature convergence that occurs when high-scoring individuals steer population growth in the wrong direction. The algorithm is applied to flight departure delay analysis at a large hub airport. A BN model is created from the flight data, and experiments show that parameter learning can reflect departure delays.
NASA Astrophysics Data System (ADS)
Zhileykin, M. M.; Kotiev, G. O.; Nagatsev, M. V.
2018-02-01
In order to meet the growing mobility requirements for wheeled vehicles on all types of terrain, engineers have to develop a large number of specialized control algorithms for multi-axle wheeled vehicle (MWV) suspensions, improving such qualities as ride comfort, handling and stability. The authors have developed an adaptive algorithm for dynamic damping of MWV body oscillations. The algorithm provides high ride comfort and high vehicle mobility. The article discloses a method for the synthesis of an adaptive dynamic continuous algorithm for MWV body oscillation damping and provides simulation results proving the high efficiency of the developed control algorithm.
A serendipitous survey of prediction algorithms for amyloidogenicity
Roland, Bartholomew P.; Kodali, Ravindra; Mishra, Rakesh; Wetzel, Ronald
2014-01-01
The 17-amino-acid N-terminal segment of the Huntingtin protein, httNT, grows into stable α-helix-rich oligomeric aggregates when incubated under physiological conditions. We examined 15 scrambled-sequence versions of an httNT peptide for their stabilities against aggregation in aqueous solution at low micromolar concentration and physiological conditions. Surprisingly, given their derivation from a sequence that readily assembles into highly stable α-helical aggregates that fail to convert into β-structure, we found that three of these scrambled peptides rapidly grow into amyloid-like fibrils, while two others also develop amyloid somewhat more slowly. The other 10 scrambled peptides do not detectably form any aggregates after 100 hrs incubation under these conditions. We then analyzed these sequences using four previously described algorithms for predicting the tendencies of peptides to grow into amyloid or other β-aggregates. We found that these algorithms - Zyggregator, Tango, Waltz and Zipper - varied greatly in the number of sequences predicted to be amyloidogenic and in their abilities to correctly identify the amyloid-forming members of the scrambled peptide collection. The results are discussed in the context of a review of the sequence and structural factors currently thought to be important in determining amyloid formation kinetics and thermodynamics. PMID:23893755
New segmentation-based tone mapping algorithm for high dynamic range image
NASA Astrophysics Data System (ADS)
Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong
2017-07-01
Traditional tone mapping algorithms for the display of high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. Firstly, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray levels of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to obtain the final result. Experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.
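A hedged sketch of the exposure-region splitting step: the thresholds derived from the histogram peak and mean gray level are illustrative choices, and the paper's LBP-based over-exposure test is not reproduced here.

```python
import numpy as np

def exposure_regions(luminance):
    """Split pixels into under-, normal-, and over-exposure regions
    using the histogram peak and the mean gray level; the threshold
    formulas are illustrative assumptions, not the paper's rules."""
    hist, edges = np.histogram(luminance, bins=256)
    peak = edges[np.argmax(hist)]              # dominant gray level
    mean = luminance.mean()                    # average gray level
    t_low = 0.5 * mean                         # under-exposure cutoff
    t_high = 0.5 * (peak + luminance.max())    # over-exposure cutoff
    under = luminance < t_low
    over = luminance > t_high
    return under, ~(under | over), over

img = np.random.rand(64, 64) ** 2  # synthetic luminance channel
u, n, o = exposure_regions(img)
print(u.sum(), n.sum(), o.sum())   # each region gets its own mapping
```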
Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.
2014-01-01
This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested on several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860
Mallette, Jennifer R.; Casale, John F.; Jordan, James; Morello, David R.; Beyer, Paul M.
2016-01-01
Previously, geo-sourcing to five major coca growing regions within South America was accomplished. However, the expansion of coca cultivation throughout South America made sub-regional origin determinations increasingly difficult. The former methodology was recently enhanced with additional stable isotope analyses (2H and 18O) to fully characterize cocaine due to the varying environmental conditions in which the coca was grown. An improved data analysis method was implemented with the combination of machine learning and multivariate statistical analysis methods to provide further partitioning between growing regions. Here, we show how the combination of trace cocaine alkaloids, stable isotopes, and multivariate statistical analyses can be used to classify illicit cocaine as originating from one of 19 growing regions within South America. The data obtained through this approach can be used to describe current coca cultivation and production trends, highlight trafficking routes, as well as identify new coca growing regions. PMID:27006288
Computational study of RNA folding kinetics and thermodynamics
NASA Astrophysics Data System (ADS)
Morgan, Steven Robert
RNA in its many forms is involved in the processes of protein manufacture, gene splicing, catalysis and gene regulation. It is also the store of genetic information in some viruses. The function of RNA is determined by its structure, and it is the purpose of this thesis to investigate kinetic and thermodynamic properties of RNA secondary structures in order to obtain a better understanding of their formation and function. Our main tenet is that kinetic formation of RNA structure is necessary to explain features found in natural RNA structures, as well as aspects of the biological function of RNA. Firstly we show that examination of the energies of fragments of RNA secondary structure provides evidence for kinetic formation of structure. Local regions of RNA of length less than about 100 nucleotides adopt a conformation with energy near or equal to the minimum possible for those regions, whilst the energies of larger domains are much further from their respective minima. This is consistent with the patterns that would be expected if RNA structure is folded kinetically during transcription. A Monte Carlo algorithm is then used to model the kinetic folding of RNA during transcriptional growth. The algorithm is capable of finding the correct structure of a natural RNA for which the minimum free energy approach is unsuccessful. In the viral phage MS2, kinetically formed RNA structure plays an important role in the regulation of gene expression. The folding algorithm can accurately model this by kinetically controlling access to the gene initiation region. The algorithm is also successfully used to model the control of replication in the ColE1 plasmid. Taking a different approach, we then use a simplified model of RNA secondary structure to investigate the size of energy barriers between degenerate minimum energy structures. This model has much in common with physical systems such as spin glasses, and in fact shows similar behaviour to these systems in that energy barriers between structures grow quickly with the length of the RNA sequence. These barriers will serve to trap RNA in non-optimal structures. Together these studies demonstrate the necessity of studying RNA secondary structure from a kinetic point of view, and provide clear directions in which further work may be taken. Kinetic models of RNA secondary structure should continue to prove useful in modelling the structure and function of RNA.
A street rubbish detection algorithm based on Sift and RCNN
NASA Astrophysics Data System (ADS)
Yu, XiPeng; Chen, Zhong; Zhang, Shuo; Zhang, Ting
2018-02-01
This paper presents a street rubbish detection algorithm based on image registration with SIFT features and RCNN. Firstly, rubbish region proposals are obtained on the real-time street image, and a CNN is trained on a sample set consisting of rubbish and non-rubbish images. Secondly, for every clean street image, SIFT features are extracted and image registration is performed against the real-time street image to obtain a differential image; the differential image filters out most background information, and region proposals where rubbish may appear are then obtained on it using the selective search algorithm. Finally, the CNN model classifies the image pixel data in each region proposal on the real-time street image; according to the output vector of the CNN, each proposal is judged to contain rubbish or not, and proposals containing rubbish are marked on the real-time street image. Because the CNN only examines the region proposals where rubbish may appear, rather than the whole image, a large number of false detections is avoided. Unlike traditional region-proposal-based object detection algorithms, the region proposals are obtained on the differential image rather than the whole real-time street image, greatly reducing the number of invalid proposals. The algorithm achieves a high mean average precision (mAP).
Optimizing simulated fertilizer additions using a genetic algorithm with a nutrient uptake model
Wendell P. Cropper; N.B. Comerford
2005-01-01
Intensive management of pine plantations in the southeastern coastal plain typically involves weed and pest control, and the addition of fertilizer to meet the high nutrient demand of rapidly growing pines. In this study we coupled a mechanistic nutrient uptake model (SSAND, soil supply and nutrient demand) with a genetic algorithm (GA) in order to estimate the minimum...
Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho
2018-01-01
To evaluate observer preference for the image quality of chest radiography using a point spread function (PSF) deconvolution algorithm (TRUVIEW ART algorithm, DRTECH Corp.) compared with original chest radiography for visualization of anatomic regions of the chest. Fifty pairs of prospectively enrolled posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point preference scale. The significance of differences in reader preference was tested with a Wilcoxon signed-rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.001) and for overall image quality (mean, 3.8; range, 3.3-4.0; p < 0.001). The most preferred anatomical regions were the azygoesophageal recess, thoracic spine, and unobscured lung. The visibility of chest anatomical structures with the PSF deconvolution algorithm was superior to that of original chest radiography.
NASA Astrophysics Data System (ADS)
Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent
2016-07-01
Currently, several thousand objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and that can efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem, because for S = 2 the problem has polynomial complexity. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information, and in ambiguous situations (e.g. satellite clusters) this will lead to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem, and it was shown that the EGA is able to find a good approximate solution with polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations, which means that the algorithm is restricted to orbits described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.
NASA Astrophysics Data System (ADS)
Tavakkoli Estahbanat, A.; Dehghani, M.
2017-09-01
In interferometry, phases are wrapped modulo 2π; recovering the integer number of phase cycles lost in wrapping is the main goal of unwrapping algorithms. Although the density of points in conventional interferometry is high, this does not help in some cases, such as large temporal baselines or noisy interferograms: noisy pixels not only fail to improve the results but also introduce unwrapping errors during interferogram unwrapping. In the persistent scatterer (PS) technique, the sparsity of PS pixels makes phase unwrapping difficult, and because of the irregular data separation, conventional methods are ineffective. Unwrapping techniques are divided into path-independent and path-dependent according to the unwrapping path. A region-growing method, which is a path-dependent technique, has been used to unwrap PS data. In this paper the idea of the extended Kalman filter (EKF) is generalized to PS data; the algorithm accounts for the nonlinearity of the PS unwrapping problem as well as the conventional unwrapping problem. A pulse-pair method enhanced with singular value decomposition (SVD) is used to estimate the spectral shift from the interferometric power spectral density in 7×7 local windows. Furthermore, a hybrid cost map is used to manage the unwrapping path. The algorithm was implemented on simulated PS data: a few points were randomly selected from a regular grid to form a sparse dataset, and the RMSE between the results and the true unambiguous phases is presented to validate the approach. The unwrapped phases produced by the algorithm were identical to the true unwrapped phases.
Windowed time-reversal music technique for super-resolution ultrasound imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Labyed, Yassin
Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements.
NASA Astrophysics Data System (ADS)
Nadeem, Syed Ahmed; Hoffman, Eric A.; Sieren, Jered P.; Saha, Punam K.
2018-03-01
Numerous large multi-center studies are incorporating the use of computed tomography (CT)-based characterization of the lung parenchyma and bronchial tree to understand chronic obstructive pulmonary disease status and progression. To the best of our knowledge, there are no fully automated airway tree segmentation methods free of the need for user review. A failure in even a fraction of segmentation results necessitates manual revision of all segmentation masks, which is laborious considering the thousands of image data sets evaluated in large studies. In this paper, we present a novel CT-based airway tree segmentation algorithm using topological leakage detection and freeze-and-grow propagation. The method is fully automated, requiring no manual inputs or post-segmentation editing. It uses simple intensity-based connectivity and a freeze-and-grow propagation algorithm to iteratively grow the airway tree starting from an initial seed inside the trachea. It begins with a conservative parameter and then gradually shifts toward more generous parameter values. The method was applied to chest CT scans of fifteen subjects at total lung capacity. Airway segmentation results were qualitatively assessed and performed comparably to an established airway segmentation method, with no major visual leakages.
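The freeze-and-grow behaviour can be sketched as iterative threshold relaxation with a leak test; this is an illustrative reading of the idea, not the authors' implementation, and the `leak_factor` heuristic is an assumption.

```python
import numpy as np
from scipy import ndimage

def freeze_and_grow(volume, seed, thresholds, leak_factor=2.0):
    """Grow a connected dark region from `seed`, relaxing the intensity
    cutoff step by step; freeze when the region suddenly explodes,
    which is taken here as the signature of a parenchymal leak."""
    mask = np.zeros(volume.shape, dtype=bool)
    prev_size = 1
    for t in thresholds:            # conservative -> generous cutoffs
        candidate = volume < t      # airways are dark (low HU) in CT
        labels, _ = ndimage.label(candidate)
        if labels[seed] == 0:       # seed not yet inside the region
            continue
        grown = labels == labels[seed]
        if prev_size > 1 and grown.sum() > leak_factor * prev_size:
            break                   # freeze: last relaxation leaked
        mask, prev_size = grown, grown.sum()
    return mask

vol = np.ones((20, 20, 20))
vol[:, 10, 10] = 0.1                # synthetic dark "airway" tube
mask = freeze_and_grow(vol, (0, 10, 10), thresholds=[0.2, 0.4, 1.1])
print(mask.sum())                   # 20: the leak at t=1.1 is frozen out
```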
A Multi-Level Approach to Modeling Rapidly Growing Mega-Regions as a Coupled Human-Natural System
NASA Astrophysics Data System (ADS)
Koch, J. A.; Tang, W.; Meentemeyer, R. K.
2013-12-01
The FUTure Urban-Regional Environment Simulation (FUTURES) integrates information on nonstationary drivers of land change (per capita land area demand, site suitability, and spatial structure of conversion events) into spatial-temporal projections of changes in landscape patterns (Meentemeyer et al., 2013). One striking feature of FUTURES is its patch-growth algorithm, which includes feedback effects of former development events across several temporal and spatial scales: cell-level transition events are aggregated into patches of land change, and their further growth is based on empirically derived parameters controlling size, shape, and dispersion. Here, we augment the FUTURES modeling framework by expanding its multilevel structure and its representation of human decision making. The new modeling framework is hierarchically organized as nested subsystems including the latest theory on telecouplings in coupled human-natural systems (Liu et al., 2013). Each subsystem represents a specific level of spatial scale and embraces agents that have decision making authority at a particular level. The subsystems are characterized with regard to their spatial representation and are connected via flows of information (e.g. regulations and policies) or material (e.g. population migration). To provide a modeling framework that is applicable to a wide range of settings and geographical regions, and to keep it computationally manageable, we implement a 'zooming factor' that allows subsystems (and hence the represented processes) to be enabled or disabled based on the extent of the study region. The implementation of the FUTURES modeling framework for a specific case study follows the observational modeling approach described in Grimm et al. (2005), starting from the analysis of empirical data in order to capture the processes relevant for specific scales and to allow a rigorous calibration and validation of the model application. In this paper, we give an introduction to the basic concept of our modeling approach and describe its strengths and weaknesses. We furthermore use empirical data for the states of North and South Carolina to demonstrate how the modeling framework can be applied to a large, heterogeneous study system with diverse decision-making agents. Grimm et al. (2005) Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology. Science 310, 987-991. Liu et al. (2013) Framing Sustainability in a Telecoupled World. Ecology and Society 18(2), 26. Meentemeyer et al. (2013) FUTURES: Multilevel Simulations of Merging Urban-Rural Landscape Structure Using a Stochastic Patch-Growing Algorithm. Annals of the Association of American Geographers 103(4), 785-807.
Uncertainty Comparison of Visual Sensing in Adverse Weather Conditions†
Lo, Shi-Wei; Wu, Jyh-Horng; Chen, Lun-Chi; Tseng, Chien-Hao; Lin, Fang-Pang; Hsu, Ching-Han
2016-01-01
This paper focuses on flood-region detection using monitoring images. However, adverse weather affects the outcome of image segmentation methods. In this paper, we present an experimental comparison of an outdoor visual sensing system using region-growing methods with two different growing rules - namely, GrowCut and RegGro. For each growing rule, several tests on adverse-weather and lens-stained scenes were performed, and the influence of the different weather conditions on the outdoor visual sensing system with each growing rule was analyzed. Furthermore, the experimental errors and uncertainties obtained with the growing rules were compared. The segmentation accuracy of flood regions yielded by the GrowCut, RegGro, and hybrid methods was 75%, 85%, and 87.7%, respectively. PMID:27447642
Fast object detection algorithm based on HOG and CNN
NASA Astrophysics Data System (ADS)
Lu, Tongwei; Wang, Dandan; Zhang, Yanduo
2018-04-01
In the field of computer vision, object classification and object detection are widely used in many fields. Traditional object detection has two main problems: the sliding-window region selection strategy has high time complexity and produces redundant windows, and the hand-crafted features are not sufficiently robust. To solve these problems, a Region Proposal Network (RPN) is used to select candidate regions instead of the selective search algorithm. Compared with traditional algorithms and selective search, the RPN has higher efficiency and accuracy. We combine HOG features and a convolutional neural network (CNN) to extract features, and we use an SVM for classification. For TorontoNet, our algorithm's mAP is 1.6 percentage points higher; for OxfordNet, it is 1.3 percentage points higher.
Development of seismic tomography software for hybrid supercomputers
NASA Astrophysics Data System (ADS)
Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton
2015-04-01
Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
NASA Astrophysics Data System (ADS)
Chuan, Zun Liang; Ismail, Noriszura; Shinyie, Wendy Ling; Lit Ken, Tan; Fam, Soo-Fen; Senawi, Azlyna; Yusoff, Wan Nur Syahidah Wan
2018-04-01
Due to limited historical precipitation records, agglomerative hierarchical clustering algorithms are widely used to extrapolate information from gauged to ungauged precipitation catchments, yielding more reliable projections of extreme hydro-meteorological events such as extreme precipitation events. However, accurately identifying the optimum number of homogeneous precipitation catchments from the dendrograms produced by agglomerative hierarchical algorithms is very subjective. The main objective of this study is to propose an efficient regionalized algorithm to identify homogeneous precipitation catchments for non-stationary precipitation time series. The homogeneous precipitation catchments are identified using an average-linkage hierarchical clustering algorithm associated with multi-scale bootstrap resampling, with the uncentered correlation coefficient as the similarity measure. The regionalized homogeneous precipitation catchments are consolidated using the K-sample Anderson-Darling non-parametric test. The analysis shows that the proposed regionalized algorithm performs better than the agglomerative hierarchical clustering algorithms proposed in previous studies.
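A minimal sketch of the clustering step under stated assumptions: a hypothetical station-by-month array stands in for gauge records, and the multi-scale bootstrap assessment is not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def uncentered_corr_dist(series):
    """Distance = 1 - uncentered correlation between station series."""
    norms = np.linalg.norm(series, axis=1, keepdims=True)
    sim = (series @ series.T) / (norms * norms.T)
    return 1.0 - sim

series = np.random.rand(12, 240)        # 12 stations, 240 months (toy)
d = uncentered_corr_dist(series)
cond = d[np.triu_indices_from(d, k=1)]  # condensed distance vector
tree = linkage(cond, method='average')  # average-linkage clustering
labels = fcluster(tree, t=4, criterion='maxclust')
print(labels)                           # candidate homogeneous groups
```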
Panek, Jeanne A
2004-03-01
This paper describes 3 years of physiological measurements on ponderosa pine (Pinus ponderosa Dougl. ex Laws.) growing along an ozone concentration gradient in the Sierra Nevada, California, including variables necessary to parameterize, validate and modify photosynthesis and stomatal conductance algorithms used to estimate ozone uptake. At all sites, gas exchange was under tight stomatal control during the growing season. Stomatal conductance was strongly correlated with leaf water potential (R2=0.82), which decreased over the growing season with decreasing soil water content (R2=0.60). Ozone uptake, carbon uptake, and transpirational water loss closely followed the dynamics of stomatal conductance. Peak ozone and CO2 uptake occurred in early summer and declined progressively thereafter. As a result, periods of maximum ozone uptake did not correspond to periods of peak ozone concentration, underscoring the inappropriateness of using current metrics based on concentration (e.g., SUM0, W126 and AOT40) for assessing ozone exposure risk to plants in this climate region. Both Jmax (maximum CO2-saturated photosynthetic rate, limited by electron transport) and Vcmax (maximum rate of Rubisco-limited carboxylation) increased toward the middle of the growing season, then decreased in September. Intrinsic water-use efficiency rose with increasing drought stress, as expected. The ratio of Jmax to Vcmax was similar to literature values of 2.0. Nighttime respiration followed a Q10 of 2.0, but was significantly higher at the high-ozone site. Respiration rates decreased by the end of the summer as a result of decreased metabolic activity and carbon stores.
Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-01-01
The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070
Mean field analysis of algorithms for scale-free networks in molecular biology.
Konini, S; Janse van Rensburg, E J
2017-01-01
The sampling of scale-free networks in Molecular Biology is usually achieved by growing networks from a seed using recursive algorithms with elementary moves which include the addition and deletion of nodes and bonds. These algorithms include the Barabási-Albert algorithm. Later algorithms, such as the Duplication-Divergence algorithm, the Solé algorithm and the iSite algorithm, were inspired by biological processes underlying the evolution of protein networks, and the networks they produce differ essentially from networks grown by the Barabási-Albert algorithm. In this paper the mean field analysis of these algorithms is reconsidered, and extended to variant and modified implementations of the algorithms. The degree sequences of scale-free networks decay according to a power-law distribution, namely P(k) ∼ k^(-γ), where γ is a scaling exponent. We derive mean field expressions for γ, and test these by numerical simulations. Generally, good agreement is obtained. We also found that some algorithms do not produce scale-free networks (for example some variant Barabási-Albert and Solé networks). PMID:29272285
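For comparison, a minimal Barabási-Albert growth sketch (preferential attachment with m = 2 bonds per new node); mean field analysis predicts γ = 3 for this algorithm, which can be checked against the empirical degree histogram.

```python
import random
from collections import Counter

def barabasi_albert(n, m=2):
    """Grow a network from a seed of m nodes; each new node attaches
    to m distinct existing nodes chosen with probability proportional
    to degree (the `repeated` endpoint list implements the sampler)."""
    repeated = list(range(m))
    edges = []
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(repeated))
        for u in chosen:
            edges.append((v, u))
            repeated += [v, u]
    return edges

degree = Counter(u for e in barabasi_albert(5000) for u in e)
hist = Counter(degree.values())
for k in sorted(hist)[:10]:
    # Fraction of nodes with degree k; the log-log slope should be ~ -3.
    print(k, hist[k] / len(degree))
```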
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
Remote Sensing of Particulate Organic Carbon Pools in the High-Latitude Oceans
NASA Technical Reports Server (NTRS)
Stramski, Dariusz; Stramska, Malgorzata
2005-01-01
The general goal of this project was to characterize spatial distributions at basin scales and variability on monthly to interannual timescales of particulate organic carbon (POC) in the high-latitude oceans. The primary objectives were: (1) To collect in situ data in the north polar waters of the Atlantic and in the Southern Ocean, necessary for the derivation of POC ocean color algorithms for these regions. (2) To derive regional POC algorithms and refine existing regional chlorophyll (Chl) algorithms, to develop understanding of processes that control bio-optical relationships underlying ocean color algorithms for POC and Chl, and to explain bio-optical differentiation between the examined polar regions and within the regions. (3) To determine basin-scale spatial patterns and temporal variability on monthly to interannual scales in satellite-derived estimates of POC and Chl pools in the investigated regions for the period of time covered by SeaWiFS and MODIS missions.
Advances in Landslide Nowcasting: Evaluation of a Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia Bach; Peters-Lidard, Christa; Adler, Robert; Hong, Yang; Kumar, Sujay; Lerner-Lam, Arthur
2011-01-01
The increasing availability of remotely sensed data offers a new opportunity to address landslide hazard assessment at larger spatial scales. A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that may experience landslide activity. This system combines a calculation of static landslide susceptibility with satellite-derived rainfall estimates and uses a threshold approach to generate a set of nowcasts that classify potentially hazardous areas. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale near real-time landslide hazard assessment efforts, it requires several modifications before it can be fully realized as an operational tool. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and hazard at the regional scale. This case study calculates a regional susceptibility map using remotely sensed and in situ information and a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America. The susceptibility map is evaluated with a regional rainfall intensity-duration triggering threshold and results are compared with the global algorithm framework for the same event. Evaluation of this regional system suggests that this empirically based approach provides one plausible way to approach some of the data and resolution issues identified in the global assessment. The presented methodology is straightforward to implement, improves upon the global approach, and allows for results to be transferable between regions. The results also highlight several remaining challenges, including the empirical nature of the algorithm framework and adequate information for algorithm validation. Conclusions suggest that integrating additional triggering factors such as soil moisture may help to improve algorithm performance accuracy. The regional algorithm scenario represents an important step forward in advancing regional and global-scale landslide hazard assessment.
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia B.; Adler, Robert; Hong, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited non-landslide event data for more comprehensive evaluation. Additional factors that may improve algorithm performance accuracy include incorporating additional triggering factors such as tectonic activity, anthropogenic impacts and soil moisture into the algorithm calculation. Despite these limitations, the methodology presented in this regional evaluation is both straightforward to calculate and easy to interpret, making results transferable between regions and allowing findings to be placed within an inter-comparison framework. The regional algorithm scenario represents an important step in advancing regional and global-scale landslide hazard assessment and forecasting.
Hierarchical image segmentation via recursive superpixel with adaptive regularity
NASA Astrophysics Data System (ADS)
Nakamura, Kensuke; Hong, Byung-Woo
2017-11-01
A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined based on the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving the accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details, and it surpasses the other algorithms in the balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
Customizing FP-growth algorithm to parallel mining with Charm++ library
NASA Astrophysics Data System (ADS)
Puścian, Marek
2017-08-01
This paper presents a frequent itemset mining algorithm customized to handle growing data repositories. The proposed solution applies a master-slave scheme to the frequent pattern growth (FP-growth) technique. Efficient utilization of the available computation units is achieved by dynamic reallocation of tasks: conditional frequent pattern trees are assigned to parallel workers based on their workload. The proposed enhancements have been successfully implemented using the Charm++ library. This paper discusses the performance of the parallelized FP-growth algorithm on different datasets, illustrated with experiments and measurements performed on a multiprocessor, multithreaded computer.
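The workload-based reallocation can be sketched as greedy assignment of conditional-tree tasks to the least-loaded worker; the task list and cost estimates below are hypothetical, and this stands in for (rather than uses) the Charm++ scheduling machinery.

```python
import heapq

def assign_tasks(tasks, n_workers):
    """Greedily place (item, estimated_cost) conditional-tree tasks on
    the currently least-loaded worker, largest tasks first - a common
    longest-processing-time heuristic for balancing worker loads."""
    heap = [(0.0, w, []) for w in range(n_workers)]  # (load, id, tasks)
    heapq.heapify(heap)
    for item, cost in sorted(tasks, key=lambda t: -t[1]):
        load, w, assigned = heapq.heappop(heap)
        assigned.append(item)
        heapq.heappush(heap, (load + cost, w, assigned))
    return sorted(heap, key=lambda e: e[1])

# Hypothetical conditional FP-trees with rough mining-cost estimates.
tasks = [('a', 5.0), ('b', 3.0), ('c', 2.5), ('d', 2.0), ('e', 1.0)]
for load, w, assigned in assign_tasks(tasks, 2):
    print(f'worker {w}: {assigned} (load {load})')
```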
A review on economic emission dispatch problems using quantum computational intelligence
NASA Astrophysics Data System (ADS)
Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.
2016-11-01
Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, the limitation of natural resources and global warming have placed this topic at the center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving economic emission dispatch problems. QCI techniques such as the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This paper should encourage researchers to use more QCI-based algorithms to obtain better optimal results for solving EED problems.
Big Data GPU-Driven Parallel Processing Spatial and Spatio-Temporal Clustering Algorithms
NASA Astrophysics Data System (ADS)
Konstantaras, Antonios; Skounakis, Emmanouil; Kilty, James-Alexander; Frantzeskakis, Theofanis; Maravelakis, Emmanuel
2016-04-01
Advances in graphics processing units' technology towards encompassing parallel architectures [1], comprised of thousands of cores and multiples of parallel threads, provide the foundation in terms of hardware for the rapid processing of various parallel applications regarding seismic big data analysis. Seismic data are normally stored as collections of vectors in massive matrices, growing rapidly in size as wider areas are covered, denser recording networks are being established and decades of data are being compiled together [2]. Yet, many processes regarding seismic data analysis are performed on each seismic event independently or as distinct tiles [3] of specific grouped seismic events within a much larger data set. Such processes, independent of one another, can be performed in parallel, narrowing down processing times drastically [1,3]. This research work presents the development and implementation of three parallel processing algorithms using CUDA C [4] for the investigation of potentially distinct seismic regions [5,6] present in the vicinity of the southern Hellenic seismic arc. The algorithms, programmed and executed in parallel comparatively, are: fuzzy k-means clustering with expert knowledge [7] in assigning the overall number of clusters; density-based clustering [8]; and a self-developed spatio-temporal clustering algorithm encompassing expert [9] and empirical knowledge [10] for the specific area under investigation. Indexing terms: GPU parallel programming, Cuda C, heterogeneous processing, distinct seismic regions, parallel clustering algorithms, spatio-temporal clustering. References [1] Kirk, D. and Hwu, W.: 'Programming massively parallel processors - A hands-on approach', 2nd Edition, Morgan Kaufman Publisher, 2013 [2] Konstantaras, A., Valianatos, F., Varley, M.R. and Makris, J.P.: 'Soft-Computing Modelling of Seismicity in the Southern Hellenic Arc', Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [3] Papadakis, S. and Diamantaras, K.: 'Programming and architecture of parallel processing systems', 1st Edition, Eds. Kleidarithmos, 2011 [4] NVIDIA: 'NVidia CUDA C Programming Guide', version 5.0, NVidia (reference book) [5] Konstantaras, A.: 'Classification of Distinct Seismic Regions and Regional Temporal Modelling of Seismicity in the Vicinity of the Hellenic Seismic Arc', IEEE Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6 (4), pp. 1857-1863, 2013 [6] Konstantaras, A., Varley, M.R., Valianatos, F., Collins, G. and Holifield, P.: 'Recognition of electric earthquake precursors using neuro-fuzzy models: methodology and simulation results', Proc. IASTED International Conference on Signal Processing Pattern Recognition and Applications (SPPRA 2002), Crete, Greece, pp. 303-308, 2002 [7] Konstantaras, A., Katsifarakis, E., Maravelakis, E., Skounakis, E., Kokkinos, E. and Karapidakis, E.: 'Intelligent Spatial-Clustering of Seismicity in the Vicinity of the Hellenic Seismic Arc', Earth Science Research, vol. 1 (2), pp. 1-10, 2012 [8] Georgoulas, G., Konstantaras, A., Katsifarakis, E., Stylios, C.D., Maravelakis, E. and Vachtsevanos, G.: '"Seismic-Mass" Density-based Algorithm for Spatio-Temporal Clustering', Expert Systems with Applications, vol. 40 (10), pp. 4183-4189, 2013 [9] Konstantaras, A.J.: 'Expert knowledge-based algorithm for the dynamic discrimination of interactive natural clusters', Earth Science Informatics, 2015 (In Press, see: www.scopus.com) [10] Drakatos, G. and Latoussakis, J.: 'A catalog of aftershock sequences in Greece (1971-1997): Their spatial and temporal characteristics', Journal of Seismology, vol. 5, pp. 137-145, 2001
Time reversal and phase coherent music techniques for super-resolution ultrasound imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Labyed, Yassin
Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements. A modified TR-MUSIC imaging algorithm is used to account for ultrasound scattering from both density and compressibility contrasts. The phase response of ultrasound transducer elements is accounted for in a PC-MUSIC system.
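A minimal sketch of the sub-region windowing idea above, assuming a square imaging grid and hypothetical window and overlap sizes; the random array stands in for the per-window TR-MUSIC pseudospectrum, and overlapping contributions are averaged:

import numpy as np

def overlapping_subregions(n, win=32, overlap=16):
    # Enumerate slices of overlapping win-by-win windows covering an
    # n-by-n imaging grid; adjacent windows share `overlap` pixels.
    step = win - overlap
    for x0 in range(0, n - win + 1, step):
        for y0 in range(0, n - win + 1, step):
            yield slice(x0, x0 + win), slice(y0, y0 + win)

n = 256
image = np.zeros((n, n))
counts = np.zeros((n, n))
for sx, sy in overlapping_subregions(n):
    pseudospectrum = np.random.rand(32, 32)  # stand-in for windowed TR-MUSIC
    image[sx, sy] += pseudospectrum
    counts[sx, sy] += 1
image /= np.maximum(counts, 1)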
Lee, Junghoon; Lee, Joosung; Song, Sangha; Lee, Hyunsook; Lee, Kyoungjoung; Yoon, Youngro
2008-01-01
Automatic detection of suspicious pain regions is very useful in medical digital infrared thermal imaging research. To detect those regions, we use the SOFES (Survival Of the Fittest Evolution Strategy) algorithm, one of the multimodal function optimization methods. We apply this algorithm to common conditions such as the diabetic foot, degenerative arthritis and varicose veins. The SOFES algorithm is able to detect hot spots or warm lines such as veins, and over a hundred trials it converged very quickly.
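A generic (mu + lambda) evolution strategy sketch of the multimodal-search idea above (not the SOFES selection scheme itself); the thermal image, bounds and strategy parameters are hypothetical:

import numpy as np

def es_maximize(f, lo, hi, mu=10, lam=40, gens=60, sigma=5.0, seed=0):
    # (mu + lambda) evolution strategy: Gaussian offspring around the
    # current parents; the best mu individuals survive each generation.
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(mu, len(lo)))
    for _ in range(gens):
        parents = P[rng.integers(mu, size=lam)]
        offspring = np.clip(parents + rng.normal(0, sigma, parents.shape), lo, hi)
        pool = np.vstack([P, offspring])
        P = pool[np.argsort([-f(x) for x in pool])[:mu]]
    return P  # surviving candidates cluster around hot spots

thermal = np.random.rand(240, 320)  # placeholder thermogram
hot = es_maximize(lambda x: thermal[int(x[0]), int(x[1])],
                  lo=np.array([0.0, 0.0]), hi=np.array([239.0, 319.0]))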
NASA Astrophysics Data System (ADS)
Champion, N.
2012-08-01
In contrast to aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps to perform when processing satellite images, as they may alter subsequent procedures such as atmospheric corrections, DSM production or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French mapping agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and on a region-growing procedure. Seeds (corresponding to clouds) are first extracted through a pixel-to-pixel comparison between the images contained in the time series (the presence of a cloud is assumed to be related to a high variation of reflectance between two images). Clouds are then delineated finely using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested in this paper using time series with 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, a particular goal of this paper is to show to what extent and in which way our method can be adapted to this kind of imagery.
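A minimal sketch of the two steps above, assuming single-band reflectance arrays in [0, 1] and hypothetical thresholds delta and tol (the paper's actual criteria are not specified at this level of detail):

import numpy as np
from collections import deque

def cloud_seeds(img, others, delta=0.3):
    # A pixel seeds a cloud if its reflectance exceeds the same pixel in
    # every other image of the time series by more than delta.
    return np.all(np.stack([img - o for o in others]) > delta, axis=0)

def grow_region(img, seeds, tol=0.1):
    # Region growing: absorb 4-neighbours whose reflectance is within tol
    # of an already accepted pixel.
    mask = seeds.copy()
    queue = deque(zip(*np.nonzero(seeds)))
    h, w = img.shape
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj] \
                    and abs(img[ni, nj] - img[i, j]) < tol:
                mask[ni, nj] = True
                queue.append((ni, nj))
    return mask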
Communication Avoiding and Overlapping for Numerical Linear Algebra
2012-05-08
To scale numerical linear algebra problems to future exascale systems, communication cost must be avoided or overlapped, since the cost of communication will continue to grow relative to the cost of computation. Communication-avoiding 2.5D algorithms improve scalability by reducing communication. With exascale computing as the long-term goal, the community needs to develop such techniques.
An Extension of CART's Pruning Algorithm. Program Statistics Research Technical Report No. 91-11.
ERIC Educational Resources Information Center
Kim, Sung-Ho
Among the computer-based methods for constructing trees, such as AID, THAID, CART, and FACT, CART is the only one whose algorithm first grows a tree and then prunes it. The pruning component of CART is analogous in spirit to the backward elimination approach in regression analysis. This idea provides a tool in…
Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan
2018-01-01
Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, limited personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source segmentation algorithm GrowCut were assessed through comparison to a manually generated ground truth of the same anatomy, using 10 CT lower-jaw data sets from the clinical routine. Assessment parameters were segmentation time, volume, voxel number, Dice score and Hausdorff distance. Overall, semi-automatic GrowCut segmentation times were about one minute. Mean Dice scores of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p > 0.05) and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Owing to its open-source basis, the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with larger data sets are areas of future work.
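The two agreement metrics used above can be computed directly from binary voxel masks; a minimal sketch assuming two non-empty boolean numpy arrays of identical shape (names are illustrative, not from the study's code):

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(a, b):
    # Dice = 2|A ∩ B| / (|A| + |B|) on boolean voxel masks.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_voxels(a, b):
    # Symmetric Hausdorff distance between voxel coordinate sets, in voxels.
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])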
Rapid Mapping Of Floods Using SAR Data: Opportunities And Critical Aspects
NASA Astrophysics Data System (ADS)
Pulvirenti, Luca; Pierdicca, Nazzareno; Chini, Marco
2013-04-01
The potential of spaceborne Synthetic Aperture Radar (SAR) for flood mapping has been demonstrated by several past investigations. The synoptic view, the capability to operate in almost all weather conditions during both day and night, and the sensitivity of the microwave band to water are the key features that make SAR data useful for monitoring inundation events. In addition, their high spatial resolution, which can reach 1 m with the new generation of X-band instruments such as TerraSAR-X and COSMO-SkyMed (CSK), allows emergency managers to use flood maps at very high spatial resolution. CSK also offers the possibility of performing frequent observations of regions hit by floods, thanks to its four-satellite constellation. Current research on flood mapping using SAR is focused on the development of automatic algorithms for near-real-time applications. The approaches are generally based on the low radar return from smooth open water bodies, which behave as specular reflectors and appear dark in SAR images. The major advantage of automatic algorithms is the computational efficiency that makes them suitable for rapid mapping purposes. The choice of the threshold value that, in this kind of algorithm, separates flooded from non-flooded areas is a critical aspect, because it depends on the characteristics of the observed scenario and on system parameters. To deal with this aspect, an algorithm for the automatic detection of regions of low backscatter has been developed. It basically accomplishes three steps: 1) division of the SAR image into a set of non-overlapping sub-images or splits; 2) selection of inhomogeneous sub-images that contain (at least) two populations of pixels, one of which is formed by dark pixels; 3) application in sequence of an automatic thresholding algorithm and a region-growing algorithm in order to produce a homogeneous map of flooded areas. Besides the aforementioned choice of the threshold, rapid mapping of floods may present other critical aspects. Searching only for low SAR backscatter areas may cause inaccuracies, because flooded soils do not always act as smooth open water bodies. The presence of wind or of vegetation emerging above the water surface may give rise to an increase of the radar backscatter. In particular, mapping flooded vegetation using SAR data may represent a difficult task, since backscattering phenomena in the volume between canopy, trunks and floodwater are quite complex. A typical phenomenon is the double-bounce effect involving soil and stems or trunks, which is generally enhanced by the floodwater, so that flooded vegetation may appear very bright in a SAR image. Even in the absence of dense vegetation or wind, some regions may appear dark because of artefacts due to topography (shadowing), absorption caused by wet snow, and attenuation caused by heavy precipitating clouds (X-band SARs). Examples of the aforementioned effects that may limit the reliability of flood maps will be presented at the conference, and some indications for dealing with these effects (e.g. presence of vegetation and of artefacts) will be provided.
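A minimal sketch of steps 1-2 above, assuming a SAR backscatter image in dB and a hypothetical population-separation test (the actual inhomogeneity criterion is not specified at this level of detail); step 3 would apply the pooled threshold followed by region growing:

import numpy as np
from skimage.filters import threshold_otsu

def pooled_water_threshold(img_db, tile=128, min_gap=4.0):
    # Split the image into non-overlapping tiles; keep a tile's Otsu
    # threshold only if it separates two well-spaced populations
    # (dark water vs. land), then pool the per-tile thresholds.
    thresholds = []
    for i in range(0, img_db.shape[0] - tile + 1, tile):
        for j in range(0, img_db.shape[1] - tile + 1, tile):
            t = img_db[i:i + tile, j:j + tile]
            if t.std() < 1e-6:
                continue  # homogeneous tile, skip
            thr = threshold_otsu(t)
            lo, hi = t[t <= thr], t[t > thr]
            if hi.mean() - lo.mean() > min_gap:  # inhomogeneous tile
                thresholds.append(thr)
    return np.median(thresholds) if thresholds else None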
NASA Astrophysics Data System (ADS)
Chen, Xueli; Zhang, Qitan; Yang, Defu; Liang, Jimin
2014-01-01
To provide a solution for a specific problem in gastric cancer detection, in which low-scattering regions coexist with both non-scattering and high-scattering regions, a novel hybrid radiosity-SP3 reconstruction algorithm for bioluminescence tomography is proposed in this paper. In the algorithm, the third-order simplified spherical harmonics approximation (SP3) is combined with the radiosity equation to describe bioluminescent light propagation in tissues, which provides acceptable accuracy for a turbid medium with both low- and non-scattering regions. The performance of the algorithm was evaluated with digital-mouse-based simulations and an in situ experiment on a gastric cancer-bearing mouse. Preliminary results demonstrate the feasibility and superiority of the proposed algorithm for turbid media with low- and non-scattering regions.
Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1991-01-01
The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.
On the mechanics of continua with boundary energies and growing surfaces
NASA Astrophysics Data System (ADS)
Papastavrou, Areti; Steinmann, Paul; Kuhl, Ellen
2013-06-01
Many biological systems are coated by thin films for protection, selective absorption, or transmembrane transport. A typical example is the mucous membrane covering the airways, the esophagus, and the intestine. Biological surfaces typically display a mechanical behavior distinct from the bulk; in particular, they may grow at different rates. Growth, morphological instabilities, and buckling of biological surfaces have been studied intensely by approximating the surface as a layer of finite thickness; however, growth has never been attributed to the surface itself. Here, we establish a theory of continua with boundary energies and growing surfaces of zero thickness, in which the surface is equipped with its own potential energy and is allowed to grow independently of the bulk. In complete analogy to the kinematic equations, the balance equations, and the constitutive equations of a growing solid body, we derive the governing equations for a growing surface. We illustrate their spatial discretization using the finite element method, and discuss their consistent algorithmic linearization. To demonstrate the conceptual differences between volume and surface growth, we simulate the constrained growth of the inner layer of a cylindrical tube. Our novel approach toward continua with growing surfaces is capable of predicting extreme growth of the inner cylindrical surface, which more than doubles its initial area. The underlying algorithmic framework is robust and stable; it allows us to predict morphological changes due to surface growth during the onset of buckling and beyond. The modeling of surface growth has immediate biomedical applications in the diagnosis and treatment of asthma, gastritis, obstructive sleep apnoea, and tumor invasion. Beyond biomedical applications, the scientific understanding of growth-induced morphological instabilities and surface wrinkling has important implications in material sciences, manufacturing, and microfabrication, with applications in soft lithography, metrology, and flexible electronics.
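For readers who want the kinematic starting point in formulas: a common ansatz in growth mechanics, which the surface theory above mirrors, is the multiplicative split of the (here, surface) deformation gradient into elastic and growth parts. This is a sketch of the standard form, with hats denoting surface quantities; the paper's exact notation may differ:

\hat{F} = \hat{F}^{\mathrm{e}} \cdot \hat{F}^{\mathrm{g}}, \qquad \hat{J} = \det\nolimits_{\mathcal{S}} \hat{F} = \hat{J}^{\mathrm{e}} \, \hat{J}^{\mathrm{g}},

so that, for instance, isotropic in-plane area growth corresponds to \hat{F}^{\mathrm{g}} = \sqrt{\vartheta}\,\hat{I} with area growth factor \vartheta.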
Analytical three-point Dixon method: With applications for spiral water-fat imaging.
Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G
2016-02-01
The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation, and to evaluate its feasibility with spiral imaging. Two sets of possible water and fat solutions are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The field map resolved by a region-growing algorithm is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single-breathhold abdominal imaging. Spiral high-resolution T1-weighted brain images were shown with sharpness comparable to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach.
Aldhaibani, Jaafar A.; Yahya, Abid; Ahmad, R. Badlishah
2014-01-01
The poor capacity at cell boundaries is not enough to meet the growing demand and the stringent designs that require high capacity and throughput irrespective of the user's location in the cellular network. In this paper, we propose new schemes for optimum fixed relay node (RN) placement in an LTE-A cellular network to enhance throughput and extend coverage at the cell-edge region. The proposed approach mitigates interference between all nodes and ensures optimum utilization through optimization of the transmitted power. Moreover, we propose a new algorithm to balance the transmitted power of a moving relay node (MR) over the cell size, providing the required SNR and throughput for users inside the vehicle while reducing the power transmitted by the MR. The numerical analysis, along with the simulation results, indicates a 40% improvement in user capacity at downlink transmission relative to cell capacity. Furthermore, the results reveal that nearly 75% of the MR transmitted power is saved after using the proposed balancing algorithm. The ATDI simulator, which handles real digital cartographic data and standard terrain formats, was used to verify the numerical results. PMID:24672378
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Background and skull voxels are excluded using threshold-based region-growing techniques with fully automated seed selection. 2) Expectation-Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians; these pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. In this stage, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation; on the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
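Stage 2 above maps naturally onto a full-covariance Gaussian mixture fitted by EM; a minimal sketch assuming a feature matrix of co-registered T1/T2 intensities for the non-background voxels (the array below is a random stand-in, and the three components correspond to CSF, grey and white matter):

import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.rand(10000, 2)  # placeholder: one row per voxel, T1 and T2 intensities

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)          # hard tissue labels
posteriors = gmm.predict_proba(X)    # soft memberships, usable by the MRF stage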
The MAGIC-5 CAD for nodule detection in low dose and thin slice lung CTs
NASA Astrophysics Data System (ADS)
Cerello, Piergiorgio; MAGIC-5 Collaboration
2010-11-01
Lung cancer is the leading cause of cancer-related mortality in developed countries. Only 10-15% of all men and women diagnosed with lung cancer live 5 years after the diagnosis. However, the 5-year survival rate for patients diagnosed in the early asymptomatic stage of the disease can reach 70%. Early-stage lung cancers can be diagnosed by detecting non-calcified small pulmonary nodules with computed tomography (CT). Computer-aided detection (CAD) could support radiologists in the analysis of the large amount of noisy images generated in screening programs, where low-dose and thin-slice settings are used. The MAGIC-5 project, funded by the Istituto Nazionale di Fisica Nucleare (INFN, Italy) and Ministero dell'Università e della Ricerca (MUR, Italy), developed a multi-method approach based on three CAD algorithms used in parallel, with their results merged: the Channeler Ant Model (CAM), based on virtual ant colonies; the Dot-Enhancement/Pleura Surface Normals/VBNA (DE-PSN-VBNA); and the Region Growing Volume Plateau (RGVP). Preliminary results show quite good performance, to be improved by refining the individual algorithms and by the added value of merging their results.
The impact of drought on ozone dry deposition over eastern Texas
NASA Astrophysics Data System (ADS)
Huang, Ling; McDonald-Buller, Elena C.; McGaughey, Gary; Kimura, Yosuke; Allen, David T.
2016-02-01
Dry deposition represents a critical pathway through which ground-level ozone is removed from the atmosphere. Understanding the effects of drought on ozone dry deposition is essential for air quality modeling and management in regions of the world with recurring droughts. This work applied the widely used Zhang dry deposition algorithm to examine seasonal and interannual changes in estimated ozone dry deposition velocities and component resistances/conductances over eastern Texas during years with drought (2006 and 2011) as well as a year with slightly cooler temperatures and above average rainfall (2007). Simulated area-averaged daytime ozone dry deposition velocities ranged between 0.26 and 0.47 cm/s. Seasonal patterns reflected the combined seasonal variations in non-stomatal and stomatal deposition pathways. Daytime ozone dry deposition velocities during the growing season were consistently larger during 2007 compared to 2006 and 2011. These differences were associated with differences in stomatal conductances and were most pronounced in forested areas. Reductions in stomatal conductances under drought conditions were highly sensitive to increases in vapor pressure deficit and warmer temperatures in Zhang's algorithm. Reductions in daytime ozone deposition velocities and deposition mass during drought years were associated with estimates of higher surface ozone concentrations.
Dark-field microscopic image stitching method for surface defects evaluation of large fine optics.
Liu, Dong; Wang, Shitong; Cao, Pin; Li, Lu; Cheng, Zhongtao; Gao, Xin; Yang, Yongying
2013-03-11
One of the challenges in surface defects evaluation of large fine optics is detecting defects of microns on surfaces of tens or hundreds of millimeters. Sub-aperture scanning and stitching is considered a practical and efficient method. But since there are usually few defects on large-aperture fine optics, many sub-aperture images contain no defects or only one run-through line feature, and traditional stitching methods encounter mismatch problems. In this paper, a feature-based multi-cycle image stitching algorithm is proposed to solve the problem. The overlapping areas of sub-apertures are categorized based on the features they contain. Different types of overlapping areas are then stitched in different cycles with different methods. The stitching trace is changed to follow the one determined by the features. The whole stitching procedure is a region-growing-like process: sub-aperture blocks grow bigger after each cycle, and finally the full-aperture image is obtained. A comparison experiment shows that the proposed method is well suited to stitching sub-apertures whose overlapping areas contain very little feature information, and stitches the dark-field microscopic sub-aperture images very well.
Hsieh, Thomas M; Liu, Yi-Min; Liao, Chun-Chih; Xiao, Furen; Chiang, I-Jen; Wong, Jau-Min
2011-08-26
In recent years, magnetic resonance imaging (MRI) has become important in brain tumor diagnosis. Using this modality, physicians can locate specific pathologies by analyzing differences in tissue character presented in different types of MR images. This paper uses an algorithm integrating fuzzy c-means (FCM) and region-growing techniques for automated tumor image segmentation from patients with meningioma. Only non-contrasted T1- and T2-weighted MR images are included in the analysis. The study's aims are to correctly locate tumors in the images, and to detect those situated in the midline position of the brain. The study used non-contrasted T1- and T2-weighted MR images from 29 patients with meningioma. After FCM clustering, 32 groups of images from each patient group were put through the region-growing procedure for pixel aggregation. Then, using knowledge-based information, the system selected tumor-containing images from these groups and merged them into one tumor image. An alternative semi-supervised method was added at this stage for comparison with the automatic method. Finally, the tumor image was optimized by a morphology operator. Results from automatic segmentation were compared to the ground truth (GT) on a pixel level, and the overall data were then evaluated using a quantified system. The quantified parameters, including the percent match (PM) and correlation ratio (CR), suggested a high match between the GT and the present study's system, as well as a fair level of correspondence. The results were compatible with those from other related studies. The system successfully detected all of the tumors situated at the midline of the brain. Six cases failed in the automatic group; one also failed in the semi-supervised alternative. The remaining five cases presented noticeable edema inside the brain. In the 23 successful cases, the PM and CR values in the two groups were highly related. The results indicate that, even when using only two sets of non-contrasted MR images, the system is a reliable and efficient method of brain-tumor detection. With further development, the system demonstrates high potential for practical clinical use.
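A minimal fuzzy c-means sketch for the clustering stage above, assuming a flattened feature matrix of voxel intensities (the cluster count c=32 echoes the 32 groups mentioned in the abstract; the fuzzifier m and iteration count are conventional defaults, not the study's values):

import numpy as np

def fcm(X, c=32, m=2.0, iters=100, seed=0):
    # Fuzzy c-means: alternate fuzzy-membership and centroid updates.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers  # memberships feed the subsequent region-growing step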
Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series
NASA Astrophysics Data System (ADS)
Champion, Nicolas
2016-06-01
Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled as seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images is bigger than a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker than in the other images of the time series. The detection is composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels have a value corresponding to the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
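A minimal sketch of the three shadow-detection steps above, assuming a float NIR reflectance stack aligned over time and a hypothetical threshold thr (always-cloudy pixels yield NaN medians and are left unlabelled):

import numpy as np

def shadow_masks(nir_stack, cloud_masks, thr=0.08):
    # Step 1: per-pixel median composite over time, ignoring cloud pixels.
    stack = np.where(cloud_masks, np.nan, nir_stack)
    median = np.nanmedian(stack, axis=0)
    # Step 2: a pixel is shadow when it is darker than the composite by thr.
    masks = [(median - img) > thr for img in nir_stack]
    # Step 3 (optional region growing to refine the masks) is omitted here.
    return masks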
NASA Astrophysics Data System (ADS)
Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-03-01
The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
Generic Entity Resolution in Relational Databases
NASA Astrophysics Data System (ADS)
Sidló, Csaba István
Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually reside in relational databases and can grow to huge volumes; yet, typical solutions described in the literature employ standalone memory-resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external-memory processing. We outline a real-world scenario and demonstrate the advantage of the algorithms by performing experiments on insurance customer data.
Experimental quantum computing to solve systems of linear equations.
Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei
2013-06-07
Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
An interactive system for computer-aided diagnosis of breast masses.
Wang, Xingwei; Li, Lihua; Liu, Wei; Xu, Weidong; Lederman, Dror; Zheng, Bin
2012-10-01
Although mammography is the only clinically accepted imaging modality for screening the general population to detect breast cancer, interpreting mammograms is difficult, with low sensitivity and specificity. To provide radiologists a "visual aid" in interpreting mammograms, we developed and tested an interactive system for computer-aided detection and diagnosis (CAD) of mass-like cancers. Using this system, an observer can view CAD-cued mass regions depicted on one image and then query any suspicious regions (either cued or not cued by CAD). The CAD scheme automatically segments the suspicious region, or accepts a manually defined region, and computes a set of image features. Using a content-based image retrieval (CBIR) algorithm, CAD searches for a set of reference images depicting "abnormalities" similar to the queried region. Based on the image retrieval results and a decision algorithm, a classification score is assigned to the queried region. In this study, a reference database with 1,800 malignant mass regions and 1,800 benign and CAD-generated false-positive regions was used. A modified CBIR algorithm with a new function for stretching the attributes in the multi-dimensional space, together with the decision scheme, was optimized using a genetic algorithm. Using a leave-one-out testing method to classify suspicious mass regions, we compared the classification performance of two CBIR algorithms with either equally weighted or optimally stretched attributes. Using the modified CBIR algorithm, the area under the receiver operating characteristic curve was significantly increased from 0.865 ± 0.006 to 0.897 ± 0.005 (p < 0.001). This study demonstrated the feasibility of developing an interactive CAD system with a large reference database and achieving improved performance.
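A minimal sketch of the retrieval-and-score idea above, assuming per-region feature vectors, binary labels (1 = malignant) for the reference database, and a hypothetical per-axis stretching vector standing in for the GA-optimized attribute weights:

import numpy as np

def classification_score(query, ref_feats, ref_labels, weights, k=15):
    # Stretch each feature axis, retrieve the k nearest reference regions,
    # and score the query as the similarity-weighted fraction of malignant
    # neighbours (similarity = inverse distance).
    d = np.linalg.norm((ref_feats - query) * weights, axis=1)
    idx = np.argsort(d)[:k]
    sim = 1.0 / (d[idx] + 1e-9)
    return float(np.sum(sim * ref_labels[idx]) / np.sum(sim))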
Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis
Peng, Zhenyun; Zhang, Yaohui
2014-01-01
Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and the hair color likelihood distribution function are estimated efficiently from these labels. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is optimized using the graph-cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm was applied to a facial caricature synthesis system, and experiments showed that with the proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
A maximally stable extremal region based scene text localization method
NASA Astrophysics Data System (ADS)
Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei
2015-07-01
Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered using the properties of the fitted ellipse and the distribution properties of characters, to exclude most non-characters. Finally, a new extremal-regions projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves relatively high precision and recall rates compared with the latest published algorithms.
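A minimal sketch of the first two stages above using OpenCV's MSER detector; the file name, MSER parameters (delta, min/max area) and the ellipse aspect-ratio test are illustrative assumptions, not the paper's tuned values:

import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
mser = cv2.MSER_create(5, 60, 14400)  # delta, min_area, max_area
regions, _ = mser.detectRegions(img)

# Crude character filter: keep regions whose fitted ellipse has a plausible
# aspect ratio for a character.
candidates = []
for pts in regions:
    if len(pts) >= 5:  # fitEllipse needs at least 5 points
        _, axes, _ = cv2.fitEllipse(pts)
        minor, major = sorted(axes)
        if major > 0 and minor / major > 0.1:
            candidates.append(pts)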
Improved algorithms for estimating Total Alkalinity in Northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Devkota, M.; Dash, P.
2017-12-01
Ocean acidification (OA) is one of the serious challenges that have significant impacts on the oceans. About 25% of anthropogenically generated CO2 is absorbed by the oceans, which decreases average ocean pH. This change has critical impacts on marine species, ocean ecology, and the associated economies. Thirty-five years of observations have shown that the rate of change in OA parameters varies geographically, with higher variations in the northern Gulf of Mexico (N-GoM). Several studies have suggested that the Mississippi River significantly affects the carbon dynamics of the N-GoM coastal ecosystem. Total alkalinity (TA) algorithms developed for the major ocean basins produce inaccurate estimates in this region. Hence, a local algorithm to estimate TA is needed for this region, one that incorporates the local effects of oceanographic processes and complex spatial influences. In situ data collected in the N-GoM region during the GOMECC-I and II cruises and the GISR cruises (G-1, 3, 5) from 2007 to 2013 were assimilated and used to evaluate the existing TA algorithm, which uses sea surface temperature (SST) and sea surface salinity (SSS) as explanatory variables. To improve this algorithm, statistical analyses were first performed to improve its coefficients and functional form. Then, chlorophyll a (Chl-a) was included as an additional explanatory variable in the multiple linear regression approach, alongside SST and SSS. Based on the average concentration of Chl-a over the last 15 years, the N-GoM was divided into two regions, and two separate algorithms were developed for each region. Finally, to address spatial non-stationarity, a Geographically Weighted Regression (GWR) algorithm was developed. The existing TA algorithm produced considerable bias, with larger bias in the coastal waters. Chl-a as an additional explanatory variable reduced the bias in the residuals and improved the algorithm's accuracy; Chl-a worked as a proxy for the organic pump's pronounced effects in the coastal waters. The GWR algorithm provided a raster surface of coefficients and an even more reliable estimate of TA with the least error, addressing the spatial non-stationarity of OA in the N-GoM, which apparently was not addressed in previously developed algorithms.
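A minimal sketch of the multiple linear regression step above; the arrays are random placeholders for the cruise measurements, and the log transform of Chl-a is an assumption of this sketch rather than the paper's stated functional form:

import numpy as np

rng = np.random.default_rng(0)
SSS, SST, CHL, TA = (rng.random(200) for _ in range(4))  # placeholder in situ data

# TA ≈ b0 + b1*SSS + b2*SST + b3*log(Chl-a), fitted by ordinary least squares.
X = np.column_stack([np.ones_like(SSS), SSS, SST, np.log(CHL + 1e-6)])
coef, *_ = np.linalg.lstsq(X, TA, rcond=None)
residuals = TA - X @ coef
print(coef, residuals.std())

A GWR variant would refit these coefficients at each grid cell using distance-weighted samples, yielding the raster surface of coefficients mentioned above.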
A generalized global alignment algorithm.
Huang, Xiaoqiu; Chao, Kun-Mao
2003-01-22
Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
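For contrast with the generalized model above, a standard global alignment DP (Needleman-Wunsch) in linear space looks as follows; GAP3's generalization additionally scores long "difference regions" cheaply, which this sketch omits, and the scoring parameters are conventional placeholders:

import numpy as np

def global_align_score(a, b, match=2, mismatch=-1, gap=-2):
    # Needleman-Wunsch score in O(len(a)*len(b)) time and O(len(b)) space.
    prev = np.arange(len(b) + 1) * gap
    for i, ca in enumerate(a, 1):
        cur = np.empty_like(prev)
        cur[0] = i * gap
        for j, cb in enumerate(b, 1):
            s = match if ca == cb else mismatch
            cur[j] = max(prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
        prev = cur
    return int(prev[-1])

print(global_align_score("GATTACA", "GCATGCT"))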
Growing a hypercubical output space in a self-organizing feature map.
Bauer, H U; Villmann, T
1997-01-01
Neural maps project data from an input space onto a neuron position in an (often lower-dimensional) output space grid in a neighborhood-preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve an optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. Here we present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by adapting the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real-world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
On the performance of SART and ART algorithms for microwave imaging
NASA Astrophysics Data System (ADS)
Aprilliyani, Ria; Prabowo, Rian Gilang; Basari
2018-02-01
The development of advanced technology has changed human lifestyles in today's society. One adverse impact is the rise of degenerative diseases such as cancers and tumors, in addition to common infectious diseases. Every year the number of cancer and tumor victims grows significantly, making these diseases one of the leading causes of death in the world. In its early stage, a cancer/tumor has no definite symptoms, but it grows abnormally as tissue cells and damages normal tissue; hence, early cancer detection is required. Common diagnostic modalities such as MRI, CT and PET are difficult to operate in home or mobile environments such as an ambulance; they are also costly, unpleasant, complex, less safe and harder to move. Hence, this paper proposes a microwave imaging system, owing to its portability and low cost. In the current study, we address the performance of the simultaneous algebraic reconstruction technique (SART) algorithm applied to microwave imaging. In addition, SART performance is compared with our previous work on the algebraic reconstruction technique (ART), especially in terms of reconstructed image quality. The results showed that by applying the SART algorithm to microwave imaging, suspicious cancers/tumors can be detected with better image quality.
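A minimal sketch contrasting the two update rules, assuming a nonnegative system matrix A of ray weights and measured projections b (the relaxation factor and iteration count are illustrative):

import numpy as np

def art(A, b, iters=10, lam=0.5):
    # ART (Kaczmarz): sequential row-by-row corrections.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for i in range(A.shape[0]):
            a = A[i]
            x += lam * (b[i] - a @ x) / (a @ a + 1e-12) * a
    return x

def sart(A, b, iters=10, lam=0.5):
    # SART: one simultaneous update per sweep, normalized by row/column sums.
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1) + 1e-12
    col_sums = A.sum(axis=0) + 1e-12
    for _ in range(iters):
        x += lam * (A.T @ ((b - A @ x) / row_sums)) / col_sums
    return x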
Delaunay based algorithm for finding polygonal voids in planar point sets
NASA Astrophysics Data System (ADS)
Alonso, R.; Ojeda, J.; Hitschfeld, N.; Hervías, C.; Campusano, L. E.
2018-01-01
This paper presents a new algorithm to find under-dense regions, called voids, inside a 2D point set. The algorithm starts from terminal-edges (local longest edges) in a Delaunay triangulation and builds the largest possible low-density terminal-edge regions around them. A terminal-edge region can represent either an entire void or part of a void (subvoid). Using artificial data sets, the case of voids that are detected as several adjacent subvoids is analyzed, and four subvoid-joining criteria are proposed and evaluated. Since this work is inspired by the search for a more robust, effective and efficient algorithm to find 3D cosmological voids, the evaluation of the joining criteria considers this context. However, the design of the algorithm permits its adaptation to the requirements of any similar application.
Mizutani, Eiji; Demmel, James W
2003-01-01
This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
An improved NAS-RIF algorithm for image restoration
NASA Astrophysics Data System (ADS)
Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian
2016-10-01
Space optical images are inevitably degraded by atmospheric turbulence, errors of the optical system and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, image noise is weakened by a Contourlet denoising algorithm. Second, reliable estimation of the object support region is used to accelerate the algorithm's convergence; an optimal threshold segmentation technique is introduced to improve the object support region. Finally, an object construction limit and the logarithm function are added to enhance algorithm stability. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.
Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm
Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan
2010-01-01
The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement. PMID:20617122
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. The multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images, and the metal trace region of the original sinogram is replaced by a linear combination of the prior-image sinograms. An additional correction in the metal trace region is then performed to compensate for residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm, using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex, with multiple bone objects. PMID:28604794
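A minimal sketch of the linear-interpolation pre-correction that seeds the method above (the prior-image and residual-compensation stages are not shown); sinogram rows are projection views, columns are detector bins, and metal_trace is a boolean mask of the same shape:

import numpy as np

def interpolate_metal_trace(sinogram, metal_trace):
    # For each view, replace metal-trace bins by linear interpolation
    # between the nearest unaffected detector bins.
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_trace[v]
        if bad.any() and (~bad).any():
            out[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
    return out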
Li, Yong; Yuan, Gonglin; Wei, Zengxin
2015-01-01
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
A Method of Mapping Burned Area Using Chinese FengYun-3 MERSI Satellite Data
NASA Astrophysics Data System (ADS)
Shan, T.
2017-12-01
Wildfire is a naturally recurring global phenomenon with environmental and ecological consequences, such as effects on the global carbon budget, changes to the global carbon cycle and disruption of ecosystem succession. Burned area information is significant for post-disaster assessment and for ecosystem protection and restoration. The Medium Resolution Spectral Imager (MERSI) onboard FENGYUN-3C (FY-3C) has shown good ability for fire detection and monitoring but lacks recognition among researchers. In this study, an automated burned area mapping algorithm is proposed based on FY-3C MERSI data. The algorithm is divided into two phases: 1) selection of training pixels based on 1000-m resolution MERSI data, which offers more spectral information through the use of more vegetation indices; and 2) classification: first the region-growing method is applied to the 1000-m MERSI data to calculate the core burned area, and then the same classification method is applied to the 250-m MERSI data set using the core burned area as a seed, to obtain results at a finer spatial resolution. The performance of the algorithm was evaluated at two study sites in the United States and Canada. Accuracy assessment and validation were carried out by comparing our results with reference results derived from Landsat OLI data. The results show a high kappa coefficient and low commission error, indicating that this algorithm can improve burned area mapping accuracy at the two study sites. It may then be possible to use MERSI and other data to fill the gaps in the mapping of burned areas in the future.
A parsimonious tree-grow method for haplotype inference.
Li, Zhenping; Zhou, Wenfeng; Zhang, Xiang-Sun; Chen, Luonan
2005-09-01
Haplotype information has become increasingly important in analyzing fine-scale molecular genetics data, for applications such as disease gene mapping and drug design. Parsimony haplotyping is one of the haplotyping problems belonging to the NP-hard class. In this paper, we develop a novel algorithm for the haplotype inference problem under the parsimony criterion, based on a parsimonious tree-grow method (PTG). PTG is a heuristic algorithm that can find the minimum number of distinct haplotypes based on the criterion of keeping all genotypes resolved during the tree-grow process. In addition, a block-partitioning method is proposed to improve computational efficiency. We show that the proposed approach is not only effective, with high accuracy, but also very efficient, with computational complexity on the order of O(m^2 n) time for n single nucleotide polymorphism sites in m individual genotypes. The software is available upon request from the authors, or from http://zhangroup.aporc.org/bioinfo/ptg/ (contact: chen@elec.osaka-sandai.ac.jp). Supporting material is available from http://zhangroup.aporc.org/bioinfo/ptg/bti572supplementary.pdf
Real coded genetic algorithm for fuzzy time series prediction
NASA Astrophysics Data System (ADS)
Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.
2017-10-01
Genetic Algorithms (GA) form a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (A.I.). Some variants of GA are binary GA, real GA, messy GA, micro GA, sawtooth GA and differential evolution GA. This research article presents a real-coded GA for predicting enrollments of the University of Alabama. The University of Alabama enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict the enrollments, and a genetic algorithm optimizes the fuzzy intervals. Results are compared with other published work and found satisfactory, indicating that real-coded GAs are fast and accurate.
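A minimal real-coded GA sketch for the interval-optimization idea above, assuming a user-supplied fitness that scores a vector of interval boundaries by the resulting fuzzy model's prediction error (the selection, crossover and mutation operators are conventional choices, not necessarily the paper's):

import numpy as np

def real_ga(fitness, lo, hi, pop=40, gens=200, pc=0.9, pm=0.1, seed=0):
    # Real-coded GA: tournament selection, arithmetic crossover,
    # Gaussian mutation; fitness is minimized.
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        pairs = rng.integers(pop, size=(pop, 2))
        winners = np.where(f[pairs[:, 0]] < f[pairs[:, 1]], pairs[:, 0], pairs[:, 1])
        parents = P[winners]
        mates = parents[rng.permutation(pop)]
        alpha = rng.random((pop, 1))
        children = np.where(rng.random((pop, 1)) < pc,
                            alpha * parents + (1 - alpha) * mates, parents)
        mutate = rng.random(children.shape) < pm
        children = children + mutate * rng.normal(0.0, 0.1, children.shape) * (hi - lo)
        P = np.clip(children, lo, hi)
    f = np.array([fitness(p) for p in P])
    return P[np.argmin(f)]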
A class of least-squares filtering and identification algorithms with systolic array architectures
NASA Technical Reports Server (NTRS)
Kalson, Seth Z.; Yao, Kung
1991-01-01
A unified approach is presented for deriving a large class of new and previously known time- and order-recursive least-squares algorithms with systolic array architectures, suitable for high-throughput-rate and VLSI implementations of space-time filtering and system identification problems. The geometrical derivation given is unique in that no assumption is made concerning the rank of the sample data correlation matrix. This method utilizes and extends the concept of oblique projections, as used previously in the derivations of the least-squares lattice algorithms. Exponentially weighted least-squares criteria are considered for both sliding and growing memory.
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to advances in sensor technology, growing volumes of medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions, to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. Widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select features. However, the accuracy of clustering with a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method, coupled with functional principal component analysis, substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383
3D Reconstruction of Coronary Artery Vascular Smooth Muscle Cells
Luo, Tong; Chen, Huan; Kassab, Ghassan S.
2016-01-01
Aims The 3D geometry of individual vascular smooth muscle cells (VSMCs), which is essential for understanding the mechanical function of blood vessels, is currently not available. This paper introduces a new 3D segmentation algorithm to determine VSMC morphology and orientation. Methods and Results A total of 112 VSMCs from six porcine coronary arteries were used in the analysis. A 3D semi-automatic segmentation method was developed to reconstruct individual VSMCs from cell clumps as well as to extract the 3D geometry of VSMCs. A new edge blocking model was introduced to recognize cell boundaries, while an edge-growing scheme was developed for optimal interpolation and edge verification. The proposed methods were designed based on a Region of Interest (ROI) selected by the user and interactive responses for a limited set of key edges. Enhanced cell boundary features were used to construct the cell's initial boundary for further edge growing. A unified framework of morphological parameters (dimensions and orientations) was proposed for the 3D volume data. A virtual phantom was designed to validate the tilt angle measurements, while other parameters extracted from 3D segmentations were compared with manual measurements to assess the accuracy of the algorithm. The length, width and thickness of VSMCs were 62.9±14.9 μm, 4.6±0.6 μm and 6.2±1.8 μm (mean±SD). In the longitudinal-circumferential plane of the blood vessel, VSMCs align off the circumferential direction with two mean angles of -19.4±9.3° and 10.9±4.7°, while an out-of-plane angle (i.e., radial tilt angle) was found to be 8±7.6° with a median of 5.7°. Conclusions A 3D segmentation algorithm was developed to reconstruct individual VSMCs of blood vessel walls based on optical image stacks. The results were validated by a virtual phantom and manual measurements. The obtained 3D geometries can be utilized in mathematical models and lead to a better understanding of vascular mechanical properties and function. PMID:26882342
Automatic lesion tracking for a PET/CT based computer aided cancer therapy monitoring system
NASA Astrophysics Data System (ADS)
Opfer, Roland; Brenner, Winfried; Carlsen, Ingwer; Renisch, Steffen; Sabczynski, Jörg; Wiemker, Rafael
2008-03-01
Response assessment of cancer therapy is a crucial component towards a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. However, dealing simultaneously with several PET/CT scans poses a serious workflow problem. It can be a difficult and tedious task to extract response criteria based upon an integrated analysis of PET and CT images and to track these criteria over time. In order to improve the workflow for serial analysis of PET/CT scans, we introduce in this paper a fast lesion tracking algorithm. We combine a global multi-resolution rigid registration algorithm with a local block matching and a local region growing algorithm. Whenever the user clicks on a lesion in the baseline PET scan, the course of standardized uptake values (SUV) is automatically identified and shown to the user as a graph plot. We validated our method on data from 7 patients. Each patient underwent two or three PET/CT scans during the course of a cancer therapy. An experienced nuclear medicine physician manually measured the courses of the maximum SUVs for altogether 18 lesions. The automatic detection of the corresponding lesions resulted in SUV measurements which are nearly identical to the manually measured SUVs. Across the 38 maximum SUVs derived from manually and automatically detected lesions, we observed a correlation of 0.9994 and an average error of 0.4 SUV units.
NASA Astrophysics Data System (ADS)
Kim, Chang-Won; Kim, Jong-Hyo
2011-03-01
Perfusion CT (PCT) examinations are being used more frequently for the diagnosis of acute brain diseases such as hemorrhage and infarction, because the functional map images they produce, such as regional cerebral blood flow (rCBF), regional cerebral blood volume (rCBV), and mean transit time (MTT), may provide critical information in the emergency work-up of patient care. However, a typical PCT scans the same slices several tens of times after injection of contrast agent, which leads to a much increased radiation dose and growing concern about radiation-induced cancer risk. Reducing the number of projection views in combination with a TV minimization reconstruction technique is regarded as an option for radiation reduction. However, reconstruction artifacts due to an insufficient number of X-ray projections become problematic, especially when high contrast enhancement signals are present or patient motion occurs. In this study, we present a novel reconstruction technique using contrast-adaptive TpV minimization that can reduce reconstruction artifacts effectively by using different p-norms for high contrast and low contrast objects. In the proposed method, high contrast components are first reconstructed using thresholded projection data and low p-norm total variation to reflect sparseness in both the projection and reconstruction spaces. Next, the projection data are modified to contain only low contrast objects by creating projection data of the reconstructed high contrast components and subtracting them from the original projection data. Then, the low contrast projection data are reconstructed by using a relatively high p-norm TV minimization technique, and are combined with the reconstructed high contrast component images to produce the final reconstructed images. The proposed algorithm was applied to a numerical phantom and a clinical data set of a brain PCT exam, and the resultant images were compared with those using filtered back projection (FBP) and a conventional TV reconstruction algorithm. Our results show the potential of the proposed algorithm for image quality improvement, which in turn may lead to dose reduction.
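Full few-view CT reconstruction is beyond a short example, but the role of the p-norm can be shown with a runnable 1D total p-variation (TpV) smoothing sketch; the subgradient scheme, the eps-smoothing, and all parameters are illustrative assumptions, not the authors' reconstruction.

```python
# Minimal sketch of p-norm total variation (TpV) smoothing in 1D via
# (sub)gradient descent; illustrates only the p-norm's effect on edges.
import numpy as np

def tpv_denoise(y, p=0.8, lam=0.3, step=0.05, iters=500, eps=1e-3):
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        # subgradient of sum_i |d_i|^p, smoothed near zero for stability
        g = p * np.sign(d) * (np.abs(d) + eps) ** (p - 1.0)
        grad_tv = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x -= step * ((x - y) + lam * grad_tv)
    return x

# Smaller p favors sparser gradients (sharper edges), which is why the method
# uses a low p-norm for the sparse high-contrast components.
noisy = np.repeat([0.0, 1.0, 0.2], 50) + 0.05 * np.random.randn(150)
smooth = tpv_denoise(noisy)
```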
A study of real-time computer graphic display technology for aeronautical applications
NASA Technical Reports Server (NTRS)
Rajala, S. A.
1981-01-01
The development, simulation, and testing of an algorithm for anti-aliasing vector drawings is discussed. The pseudo anti-aliasing line drawing algorithm is an extension to Bresenham's algorithm for computer control of a digital plotter. The algorithm produces a series of overlapping line segments where the display intensity shifts from one segment to the other in this overlap (transition region). In this algorithm the length of the overlap and the intensity shift are essentially constants because the transition region is an aid to the eye in integrating the segments into a single smooth line.
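For reference, the Bresenham stepping that the pseudo anti-aliasing algorithm extends looks like the sketch below; the overlapping, intensity-shifted segments described above are not reproduced here.

```python
# Plain Bresenham line rasterization (the base algorithm being extended).
def bresenham(x0, y0, x1, y1):
    """Yield integer pixel coordinates along the line (x0,y0)-(x1,y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            return
        e2 = 2 * err
        if e2 >= dy:        # step in x
            err += dy
            x0 += sx
        if e2 <= dx:        # step in y
            err += dx
            y0 += sy

print(list(bresenham(0, 0, 5, 2)))
```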
NASA Astrophysics Data System (ADS)
Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong
2017-10-01
Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%. It also outperformed the standard machine learning algorithm, obtaining 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
Building damage assessment using airborne lidar
NASA Astrophysics Data System (ADS)
Axel, Colin; van Aardt, Jan
2017-10-01
The assessment of building damage following a natural disaster is a crucial step in determining the impact of the event itself and gauging reconstruction needs. Automatic methods for deriving damage maps from remotely sensed data are preferred, since they are regarded as being rapid and objective. We propose an algorithm for performing unsupervised building segmentation and damage assessment using airborne light detection and ranging (lidar) data. Local surface properties, including normal vectors and curvature, were used along with region growing to segment individual buildings in lidar point clouds. Damaged building candidates were identified based on rooftop inclination angle, and then damage was assessed using planarity and point height metrics. Validation of the building segmentation and damage assessment techniques was performed using airborne lidar data collected after the Haiti earthquake of 2010. Building segmentation and damage assessment accuracies of 93.8% and 78.9%, respectively, were obtained using lidar point clouds and expert damage assessments of 1953 buildings in heavily damaged regions. We believe this research presents an indication of the utility of airborne lidar remote sensing for increasing the efficiency and speed at which emergency response operations are performed.
Assessment of the Broadleaf Crops Leaf Area Index Product from the Terra MODIS Instrument
NASA Technical Reports Server (NTRS)
Tan, Bin; Hu, Jiannan; Huang, Dong; Yang, Wenze; Zhang, Ping; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.
2005-01-01
The first significant processing of Terra MODIS data, called Collection 3, covered the period from November 2000 to December 2002. The Collection 3 leaf area index (LAI) and fraction of photosynthetically active radiation absorbed by vegetation (FPAR) products for broadleaf crops exhibited three anomalies: (a) high LAI values during the peak growing season, (b) differences in LAI seasonality between the radiative transfer-based main algorithm and the vegetation index based back-up algorithm, and (c) too few retrievals from the main algorithm during the summer period when the crops are at full flush. The cause of these anomalies is a mismatch between reflectances modeled by the algorithm and MODIS measurements. Therefore, the Look-Up-Tables accompanying the algorithm were revised and implemented in Collection 4 processing. The main algorithm with the revised Look-Up-Tables generated retrievals for over 80% of the pixels with valid data. Retrievals from the back-up algorithm, although few, should be used with caution as they are generated from surface reflectances with high uncertainties.
A Semi-supervised Heat Kernel Pagerank MBO Algorithm for Data Classification
2016-07-01
...financial predictions, etc., and is finding growing use in text mining studies. In this paper, we present an efficient algorithm for classification of high... video data, set of images, hyperspectral data, medical data, text data, etc. Moreover, the framework provides a way to analyze data whose different... also be incorporated. For text classification, one can use TF-IDF (term frequency-inverse document frequency) to form feature vectors for each document
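As a concrete instance of the TF-IDF featurization mentioned above, one common route is scikit-learn's vectorizer; the toy documents are placeholders.

```python
# Minimal TF-IDF feature vectors for text classification.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["graph based semi supervised learning",
        "heat kernel pagerank on graphs",
        "text mining with tfidf features"]
X = TfidfVectorizer().fit_transform(docs)   # sparse (n_docs, n_terms) matrix
print(X.shape)
```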
Fault Identification by Unsupervised Learning Algorithm
NASA Astrophysics Data System (ADS)
Nandan, S.; Mannu, U.
2012-12-01
Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts, and vegetation, or to capture the changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For better seismic risk evaluation, it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and by probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While the Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain the fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained by focal mechanism solutions and previously mapped faults.
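A conceptual sketch of the clustering-plus-plane-fitting idea (not the authors' code): K-means partitions the hypocenters, and an SVD of each cluster gives a candidate fault-plane normal as the direction of least variance.

```python
# Cluster hypocenters, then fit a plane to each cluster via SVD; the
# smallest singular vector is the plane normal (candidate fault orientation).
import numpy as np
from sklearn.cluster import KMeans

def fault_planes(hypocenters, n_faults):
    """hypocenters: (n, 3) array of x, y, depth coordinates."""
    labels = KMeans(n_clusters=n_faults, n_init=10).fit_predict(hypocenters)
    planes = []
    for k in range(n_faults):
        pts = hypocenters[labels == k]
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        planes.append((centroid, vt[-1]))   # (point on plane, unit normal)
    return planes

events = np.random.rand(200, 3)             # placeholder catalog
for c, n in fault_planes(events, 2):
    print("centroid", c.round(2), "normal", n.round(2))
```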
Optimal Control of Hybrid Systems in Air Traffic Applications
NASA Astrophysics Data System (ADS)
Kamgarpour, Maryam
Growing concerns over the scalability of air traffic operations, air transportation fuel emissions and prices, as well as the advent of communication and sensing technologies motivate improvements to the air traffic management system. To address such improvements, in this thesis a hybrid dynamical model as an abstraction of the air traffic system is considered. Wind and hazardous weather impacts are included using a stochastic model. This thesis focuses on the design of algorithms for verification and control of hybrid and stochastic dynamical systems and the application of these algorithms to air traffic management problems. In the deterministic setting, a numerically efficient algorithm for optimal control of hybrid systems is proposed based on extensions of classical optimal control techniques. This algorithm is applied to optimize the trajectory of an Airbus 320 aircraft in the presence of wind and storms. In the stochastic setting, the verification problem of reaching a target set while avoiding obstacles (reach-avoid) is formulated as a two-player game to account for external agents' influence on system dynamics. The solution approach is applied to air traffic conflict prediction in the presence of stochastic wind. Due to the uncertainty in forecasts of the hazardous weather, and hence the unsafe regions of airspace for aircraft flight, the reach-avoid framework is extended to account for stochastic target and safe sets. This methodology is used to maximize the probability of the safety of aircraft paths through hazardous weather. Finally, the problem of modeling and optimization of arrival air traffic and runway configuration in dense airspace subject to stochastic weather data is addressed. This problem is formulated as a hybrid optimal control problem and is solved with a hierarchical approach that decouples safety and performance. As illustrated with this problem, the large scale of air traffic operations motivates future work on the efficient implementation of the proposed algorithms.
Flexible Space-Filling Designs for Complex System Simulations
2013-06-01
...interior of the experimental region and cannot fit higher-order models. We present a genetic algorithm that constructs space-filling designs with minimal correlations. Keywords: Computer Experiments, Design of Experiments, Genetic Algorithm, Latin Hypercube, Response Surface Methodology, Nearly Orthogonal
Time Delay Measurements of Key Generation Process on Smart Cards
2015-03-01
random number generator is available (Chatterjee & Gupta, 2009). The ECC algorithm will grow in usage as demands on information security increase. ... (Worldwide Mobile Enterprise Security Software 2012–2016 Forecast and Analysis), mobile identity and access management is expected to grow by 27.6 percent ... (iPad, tablets) as well as 80,000 BlackBerry phones. The mobility plan itself will be deployed in three phases over 2014, with the first phase
RF Emitter Tracking and Intent Assessment
2013-03-21
telecommunications sector. In 2010 there were 6,000 location-based applications for the iPhone, 900 for Android and 300 for BlackBerry [2]. An example... "Location-Aware Apps Keeps Growing Rapidly - But Very Few are Cross-Platform," February 2010. [Online]. Available: http://readwrite.com/2010/02/05... 3. L. Wang and Q. Xu, "GPS-Free Localization Algorithm for Wireless Sensor Networks," Sensors, vol. 10
Han, Zhaoying; Thornton-Wells, Tricia A.; Dykens, Elisabeth M.; Gore, John C.; Dawant, Benoit M.
2014-01-01
Deformation Based Morphometry (DBM) is a widely used method for characterizing anatomical differences across groups. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a DBM atlas. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithms on group differences that may be uncovered through DBM. In this study, we compared group atlas creation and DBM results obtained with five well-established non-rigid registration algorithms using thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Bases Algorithm (ABA); (2) The Image Registration Toolkit (IRTK); (3) The FSL Nonlinear Image Registration Tool (FSL); (4) The Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. Results indicate that the choice of algorithm has little effect on the creation of group atlases. However, regions of differences between groups detected with DBM vary from algorithm to algorithm both qualitatively and quantitatively. The unique nature of the data set used in this study also permits comparison of visible anatomical differences between the groups and regions of difference detected by each algorithm. Results show that the interpretation of DBM results is difficult. Four out of the five algorithms we have evaluated detect bilateral differences between the two groups in the insular cortex, the basal ganglia, orbitofrontal cortex, as well as in the cerebellum. These correspond to differences that have been reported in the literature and that are visible in our samples. But our results also show that some algorithms detect regions that are not detected by the others and that the extent of the detected regions varies from algorithm to algorithm. These results suggest that using more than one algorithm when performing DBM studies would increase confidence in the results. Properties of the algorithms such as the similarity measure they maximize and the regularity of the deformation fields, as well as the location of differences detected with DBM, also need to be taken into account in the interpretation process. PMID:22459439
NASA Astrophysics Data System (ADS)
Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid
2017-10-01
Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. Seven Landsat images were selected from various environmental regions of Iran. Training of the algorithms was performed using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based methods and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations for extracting water extents. Among the AI algorithms, an artificial neural network and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed different performances in various regions. Among the AI methods, PSO had the best performance, with an average overall accuracy of 93% and a kappa coefficient of 98%. The results indicated the applicability of the acquired band combinations for accurate and stable extraction of water extents in Landsat imagery.
Coello, Christopher; Willoch, Frode; Selnes, Per; Gjerstad, Leif; Fladby, Tormod; Skretting, Arne
2013-05-15
A voxel-based algorithm to correct for the partial volume effect in PET brain volumes is presented. This method (named LoReAn) is based on MRI-based segmentation of anatomical regions and accurate measurements of the effective point spread function of the PET imaging process. The objective is to correct for the spill-out of activity from high-uptake anatomical structures (e.g. grey matter) into low-uptake anatomical structures (e.g. white matter) in order to quantify physiological uptake in the white matter. The new algorithm is presented and validated against the state-of-the-art region-based geometric transfer matrix (GTM) method with synthetic and clinical data. Using synthetic data, both the bias and the coefficient of variation were improved in the white matter region using LoReAn compared to GTM. An increased number of anatomical regions does not affect the bias (<5%), and misregistration affects the LoReAn and GTM algorithms equally. The LoReAn algorithm appears to be a simple and promising voxel-based algorithm for studying metabolism in white matter regions. Copyright © 2013 Elsevier Inc. All rights reserved.
Global Contrast Based Salient Region Detection.
Cheng, Ming-Ming; Mitra, Niloy J; Huang, Xiaolei; Torr, Philip H S; Hu, Shi-Min
2015-03-01
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
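As a reduced illustration of global contrast saliency (the paper's regional-contrast method additionally uses spatial weighting and operates in color), a histogram-based version for grayscale images can be written as follows.

```python
# Simplified histogram-based global contrast saliency on a grayscale image.
import numpy as np

def hc_saliency(gray):
    """gray: 2D uint8 array. A level's saliency is its mean distance
    to all other levels, weighted by how often those levels occur."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    levels = np.arange(256, dtype=float)
    sal_per_level = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = sal_per_level[gray]                 # map levels back to pixels
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

img = (np.random.rand(32, 32) * 255).astype(np.uint8)
sal_map = hc_saliency(img)
```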
eqMAXEL: A new automatic earthquake location algorithm implementation for Earthworm
NASA Astrophysics Data System (ADS)
Lisowski, S.; Friberg, P. A.; Sheen, D. H.
2017-12-01
A common problem with automated earthquake location systems for local to regional scale seismic networks is false triggering and false locations inside the network caused by earthquakes at larger regional to teleseismic distances. This false location issue also presents a problem for earthquake early warning systems, where the societal impacts of false alarms can be very expensive. Towards solving this issue, Sheen et al. (2016) implemented a robust maximum-likelihood earthquake location algorithm known as MAXEL. It was shown, with both synthetics and real data for small numbers of arrivals, that large regional events are easily identifiable through metrics in the MAXEL algorithm. In the summer of 2017, we collaboratively implemented the MAXEL algorithm as a fully functional Earthworm module and tested it in regions of the USA where false detections and alarms are observed. We show robust improvement in the ability of the Earthworm system to filter out regional and teleseismic events that would have been falsely located inside the network using the traditional Earthworm hypoinverse solution. We also explore using different grid sizes in the implementation of the MAXEL algorithm, which was originally designed with South Korea as the target network size.
Regional-scale calculation of the LS factor using parallel processing
NASA Astrophysics Data System (ADS)
Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong
2015-05-01
With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementations of algorithms for computing the LS factor are becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of the algorithms, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
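To indicate what an MPI row-block decomposition of a local terrain operator can look like, here is a schematic mpi4py sketch; the random stand-in raster, the slope proxy, and the omitted one-row halo exchange are simplifying assumptions, not the paper's implementation.

```python
# Row-block decomposition of a local terrain operator with mpi4py.
# Run with e.g.: mpirun -n 4 python ls_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Root holds the full DEM; random data stands in for a real raster.
blocks = np.array_split(np.random.rand(1024, 1024), size) if rank == 0 else None
local = comm.scatter(blocks, root=0)        # each rank receives a row block

# Local computation; a real implementation would first exchange one halo row
# with neighboring ranks so cells at block edges see their true neighbors.
gy, gx = np.gradient(local)
slope = np.hypot(gx, gy)

gathered = comm.gather(slope, root=0)
if rank == 0:
    slope_full = np.vstack(gathered)
    print(slope_full.shape)
```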
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.
1997-01-01
Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with separation at a PC-value of about 0.85. Applying this threshold as a criteria for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
A., Javadpour; A., Mohammadi
2016-01-01
Background Regarding the importance of correct diagnosis in medical applications, various methods have been exploited for processing medical images. Segmentation is used to analyze anatomical structures in medical imaging. Objective This study describes a new method for brain Magnetic Resonance Image (MRI) segmentation via a novel algorithm based on genetic algorithms and regional growth. Methods Among medical imaging methods, brain MRI segmentation is important due to its non-invasive nature, high soft-tissue contrast and high spatial resolution. Size variations of brain tissues are often accompanied by various diseases such as Alzheimer's disease. As our knowledge about the relation between various brain diseases and deviations of brain anatomy increases, MRI segmentation is exploited as the first step in early diagnosis. In this paper, a regional growth method with automatic selection of initial points by a genetic algorithm is used to introduce a new method for MRI segmentation. Seed pixels and the similarity criterion are selected automatically by the genetic algorithm to maximize the accuracy and validity of the image segmentation. Results By using the genetic algorithm and defining a fitness function for image segmentation, the initial points for the algorithm were found. The proposed algorithm was applied to the images, and the results were compared with those of regional growth in which the initial points were selected manually. The results showed that the proposed algorithm could reduce segmentation error effectively. Conclusion The study concluded that the proposed algorithm could reduce segmentation error effectively and help us to diagnose brain diseases. PMID:27672629
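A generic seeded region-growing kernel, for readers unfamiliar with the primitive being tuned here; in the paper the seed locations and the similarity tolerance are chosen by the genetic algorithm, whereas both are fixed inputs in this sketch.

```python
# Generic seeded region growing on a 2D image (illustrative only).
import numpy as np

def region_grow(img, seed, tol=10.0):
    """Collect 4-connected pixels whose intensity is within tol of the seed."""
    h, w = img.shape
    seen = np.zeros((h, w), dtype=bool)
    region, stack = [], [seed]
    seed_val = float(img[seed])
    while stack:
        y, x = stack.pop()
        if not (0 <= y < h and 0 <= x < w) or seen[y, x]:
            continue
        seen[y, x] = True
        if abs(float(img[y, x]) - seed_val) <= tol:
            region.append((y, x))
            stack.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region

img = np.random.randint(0, 255, (64, 64))
print(len(region_grow(img, (32, 32), tol=40)))
```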
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
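For contrast with the branching scheme described above, a textbook single-trajectory simulated annealing loop is sketched below; the Gaussian proposal, the geometric cooling schedule, and all constants are illustrative choices, not the RBSA implementation.

```python
# Textbook single-trajectory simulated annealing (what RBSA parallelizes).
import math
import random

def anneal(objective, x0, step=1.0, t0=1.0, cooling=0.995, iters=5000):
    x, fx = x0, objective(x0)
    t = t0
    for _ in range(iters):
        cand = [xi + random.gauss(0, step) for xi in x]   # random neighbor
        fc = objective(cand)
        # accept better moves always, worse moves with Boltzmann probability
        if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
        t *= cooling                                      # lower the temperature
    return x, fx

best, val = anneal(lambda v: sum(xi * xi for xi in v), [5.0, -3.0])
print(best, val)
```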
Saliency detection algorithm based on LSC-RC
NASA Astrophysics Data System (ADS)
Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu
2018-02-01
Image saliency refers to the most prominent region of an image, the region that attracts the visual attention and response of human beings. Preferentially allocating computational resources to the salient region during image analysis and synthesis is of great significance for improving region detection. As a preprocessing step for other tasks in the image processing field, saliency detection has wide applications in image retrieval and image segmentation. Among existing methods, saliency detection based on linear spectral clustering (LSC) super-pixel segmentation has achieved good results. The saliency detection algorithm proposed in this paper improves on the regional contrast (RC) method by replacing its region formation step with LSC super-pixel blocks. Combined with recent deep learning methods, the accuracy of salient region detection is greatly improved. Finally, comparative experiments demonstrate the superiority and feasibility of the super-pixel segmentation detection algorithm based on linear spectral clustering.
Adaptive grid methods for RLV environment assessment and nozzle analysis
NASA Technical Reports Server (NTRS)
Thornburg, Hugh J.
1996-01-01
Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation, forcing functions to attract/repel points in an elliptic system, or to trigger local refinement, based upon application of an equidistribution principle. The popularity of solution-adaptive techniques is growing in tandem with unstructured methods. The difficulty of precisely controlling mesh densities and orientations with current unstructured grid generation systems has driven the use of solution-adaptive meshing. Derivatives of density or pressure are widely used in the construction of such weight functions, and have proven very successful for inviscid flows with shocks. However, less success has been realized for flowfields with viscous layers, vortices or shocks of disparate strength. It is difficult to maintain the appropriate mesh point spacing in the various regions which require a fine spacing for adequate resolution. Mesh points often migrate from important regions due to refinement of dominant features. An example of this is the well-known tendency of adaptive methods to increase the resolution of shocks in the flowfield around airfoils, but in the incorrect location due to inadequate resolution of the stagnation region. This problem has been the motivation for this research.
Algorithm Engineering: Concepts and Practice
NASA Astrophysics Data System (ADS)
Chimani, Markus; Klein, Karsten
Over the last years, the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: It starts with the design of the algorithm, followed by the analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretic performance guarantees, etc. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues like I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories—shortest path problems and text search—where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.
Oscillometric Blood Pressure Estimation: Past, Present, and Future.
Forouzanfar, Mohamad; Dajani, Hilmi R; Groza, Voicu Z; Bolic, Miodrag; Rajan, Sreeraman; Batkin, Izmail
2015-01-01
The use of automated blood pressure (BP) monitoring is growing as it does not require much expertise and can be performed by patients several times a day at home. Oscillometry is one of the most common measurement methods used in automated BP monitors. A review of the literature shows that a large variety of oscillometric algorithms have been developed for accurate estimation of BP but these algorithms are scattered in many different publications or patents. Moreover, considering that oscillometric devices dominate the home BP monitoring market, little effort has been made to survey the underlying algorithms that are used to estimate BP. In this review, a comprehensive survey of the existing oscillometric BP estimation algorithms is presented. The survey covers a broad spectrum of algorithms including the conventional maximum amplitude and derivative oscillometry as well as the recently proposed learning algorithms, model-based algorithms, and algorithms that are based on analysis of pulse morphology and pulse transit time. The aim is to classify the diverse underlying algorithms, describe each algorithm briefly, and discuss their advantages and disadvantages. This paper will also review the artifact removal techniques in oscillometry and the current standards for the automated BP monitors.
The Clark Phase-able Sample Size Problem: Long-Range Phasing and Loss of Heterozygosity in GWAS
NASA Astrophysics Data System (ADS)
Halldórsson, Bjarni V.; Aguiar, Derek; Tarpine, Ryan; Istrail, Sorin
A phase transition is taking place today. The amount of data generated by genome resequencing technologies is so large that in some cases it is now less expensive to repeat the experiment than to store the information generated by the experiment. In the next few years it is quite possible that millions of Americans will have been genotyped. The question then arises of how to make the best use of this information and jointly estimate the haplotypes of all these individuals. The premise of the paper is that long shared genomic regions (or tracts) are unlikely unless the haplotypes are identical by descent (IBD), in contrast to short shared tracts which may be identical by state (IBS). Here we estimate for populations, using the US as a model, what sample size of genotyped individuals would be necessary to have sufficiently long shared haplotype regions (tracts) that are identical by descent (IBD), at a statistically significant level. These tracts can then be used as input for a Clark-like phasing method to obtain a complete phasing solution of the sample. We estimate in this paper that for a population like the US and about 1% of the people genotyped (approximately 2 million), tracts of about 200 SNPs long are shared between pairs of individuals IBD with high probability which assures the Clark method phasing success. We show on simulated data that the algorithm will get an almost perfect solution if the number of individuals being SNP arrayed is large enough and the correctness of the algorithm grows with the number of individuals being genotyped.
Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Ueno, Junji; Mori, Kensaku
2017-02-01
Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance tube-like structures in the CT volume; then, an adaptive multiscale cavity enhancement filter is employed to detect cavity-like structures of different radii. In the second step, support vector machine learning is utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, a graph-cut algorithm is used to refine the candidate voxels into an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used to evaluate the proposed method. The average extraction rate was about 79.1%, with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning techniques was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for the lung and in a bronchoscope guidance system.
Quantification of intraventricular blood clot in MR-guided focused ultrasound surgery
NASA Astrophysics Data System (ADS)
Hess, Maggie; Looi, Thomas; Lasso, Andras; Fichtinger, Gabor; Drake, James
2015-03-01
Intraventricular hemorrhage (IVH) affects nearly 15% of preterm infants. It can lead to ventricular dilation and cognitive impairment. To ablate IVH clots, MR-guided focused ultrasound surgery (MRgFUS) is investigated. This procedure requires accurate, fast and consistent quantification of ventricle and clot volumes. We developed a semi-autonomous segmentation (SAS) algorithm for measuring changes in the ventricle and clot volumes. Images are normalized, and then ventricle and clot masks are registered to the images. Voxels of the registered masks and voxels obtained by thresholding the normalized images are used as seed points for competitive region growing, which provides the final segmentation. The user selects the areas of interest for correspondence after thresholding, and these selections are the final seeds for region growing. SAS was evaluated on an IVH porcine model. SAS was compared to ground truth manual segmentation (MS) for accuracy, efficiency, and consistency. Accuracy was determined by comparing clot and ventricle volumes produced by SAS and MS, and by comparing contours via the 95% Hausdorff distances between the two labels. In a two one-sided test, SAS and MS were found to be significantly equivalent (p < 0.01). SAS on average was found to be 15 times faster than MS (p < 0.01). Consistency was determined by repeated segmentation of the same image by both SAS and manual methods, SAS being significantly more consistent than MS (p < 0.05). SAS is a viable method to quantify the IVH clot and the lateral brain ventricles, and it is serving in a large-scale porcine study of MRgFUS treatment of IVH clot lysis.
Riccardi, Alessandro; Petkov, Todor Sergueev; Ferri, Gianluca; Masotti, Matteo; Campanini, Renato
2011-04-01
The authors presented a novel system for automated nodule detection in lung CT exams. The approach is based on (1) a lung tissue segmentation preprocessing step, composed of histogram thresholding, seeded region growing, and mathematical morphology; (2) a filtering step, whose aim is the preliminary detection of candidate nodules (via 3D fast radial filtering) and estimation of their geometrical features (via scale space analysis); and (3) a false positive reduction (FPR) step, comprising a heuristic FPR, which applies thresholds based on geometrical features, and a supervised FPR, which is based on support vector machine classification and, in turn, is enhanced by a feature extraction algorithm based on maximum intensity projection processing and Zernike moments. The system was validated on 154 chest axial CT exams provided by the lung image database consortium public database. The authors obtained correct detection of 71% of nodules marked by all radiologists, with a false positive rate of 6.5 false positives per patient (FP/patient). A higher specificity of 2.5 FP/patient was reached with a sensitivity of 60%. An independent test on the ANODE09 competition database obtained an overall score of 0.310. The system shows a novel approach to the problem of lung nodule detection in CT scans: It relies on filtering techniques, image transforms, and descriptors rather than region growing and nodule segmentation, and the results are comparable to those of other recent systems in literature and show little dependency on the different types of nodules, which is a good sign of robustness.
Constrained motion model of mobile robots and its applications.
Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong
2009-06-01
Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with the fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
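On a discretized analogue (a grid robot with unit moves), a k-step reachable region can be computed by breadth-first expansion as below; the paper's model is continuous and motion-constrained, so this is only an illustration of the concept, with hypothetical obstacle handling.

```python
# k-step reachable region for a grid robot with unit moves (illustrative).
def reachable(start, k, blocked=frozenset()):
    """Return all cells reachable from start in at most k unit moves."""
    frontier, reached = {start}, {start}
    for _ in range(k):
        frontier = {(x + dx, y + dy)
                    for x, y in frontier
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))} - blocked
        reached |= frontier
    return reached

print(len(reachable((0, 0), 3)))   # cells reachable within 3 steps
```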
Multiple feature fusion via covariance matrix for visual tracking
NASA Astrophysics Data System (ADS)
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. In the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse the color, edge and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence speed and strong global optimization ability of the quantum genetic algorithm, and the fast computation of the fast covariance intersection algorithm are used to improve the computational efficiency of the fusion, matching, and updating process, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur and so on.
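The region covariance descriptor at the core of this fusion can be sketched in a few lines: stack per-pixel features and take their covariance. The particular feature set below (position, intensity, gradients) is a common choice in the covariance-tracking literature and an assumption here, not necessarily the paper's exact fused set.

```python
# Region covariance descriptor of an image patch (illustrative feature set).
import numpy as np

def region_covariance(gray, y0, y1, x0, x1):
    patch = gray[y0:y1, x0:x1].astype(float)
    gy, gx = np.gradient(patch)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      gx.ravel(), gy.ravel()])   # (d, n_pixels)
    return np.cov(feats)                         # (d, d) descriptor

img = np.random.rand(64, 64)
C = region_covariance(img, 10, 30, 10, 30)
print(C.shape)                                   # (5, 5)
```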
Agent-Based Multicellular Modeling for Predictive Toxicology
Biological modeling is a rapidly growing field that has benefited significantly from recent technological advances, expanding traditional methods with greater computing power, parameter-determination algorithms, and the development of novel computational approaches to modeling bi...
Directional effects on NDVI and LAI retrievals from MODIS: A case study in Brazil with soybean
NASA Astrophysics Data System (ADS)
Breunig, Fábio Marcelo; Galvão, Lênio Soares; Formaggio, Antônio Roberto; Epiphanio, José Carlos Neves
2011-02-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) is largely used to estimate Leaf Area Index (LAI) using radiative transfer modeling (the "main" algorithm). When this algorithm fails for a pixel, which frequently occurs over Brazilian soybean areas, an empirical model (the "backup" algorithm) based on the relationship between the Normalized Difference Vegetation Index (NDVI) and LAI is utilized. The objective of this study is to evaluate directional effects on NDVI and subsequent LAI estimates using global (biome 3) and local empirical models, as a function of the soybean development in two growing seasons (2004-2005 and 2005-2006). The local model was derived from the pixels that had LAI values retrieved from the main algorithm. In order to keep the reproductive stage for a given cultivar as a constant factor while varying the viewing geometry, pairs of MODIS images acquired in close dates from opposite directions (backscattering and forward scattering) were selected. Linear regression relationships between the NDVI values calculated from these two directions were evaluated for different view angles (0-25°; 25-45°; 45-60°) and development stages (<45; 45-90; >90 days after planting). Impacts on LAI retrievals were analyzed. Results showed higher reflectance values in backscattering direction due to the predominance of sunlit soybean canopy components towards the sensor and higher NDVI values in forward scattering direction due to stronger shadow effects in the red waveband. NDVI differences between the two directions were statistically significant for view angles larger than 25°. The main algorithm for LAI estimation failed in the two growing seasons with gradual crop development. As a result, up to 94% of the pixels had LAI values calculated from the backup algorithm at the peak of canopy closure. Most of the pixels selected to compose the 8-day MODIS LAI product came from the forward scattering view because it displayed larger LAI values than the backscattering. Directional effects on the subsequent LAI retrievals were stronger at the peak of the soybean development (NDVI values between 0.70 and 0.85). When the global empirical model was used, LAI differences up to 3.2 for consecutive days and opposite viewing directions were observed. Such differences were reduced to values up to 1.5 with the local model. Because of the predominance of LAI retrievals from the MODIS backup algorithm during the Brazilian soybean development, care is necessary if one considers using these data in agronomic growing/yield models.
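For readers outside remote sensing, the NDVI that drives the backup algorithm is a simple band ratio; the sketch below also illustrates, with made-up reflectances, how shadow-darkened red in the forward-scattering view raises NDVI relative to backscattering.

```python
# NDVI from near-infrared and red reflectance.
import numpy as np

def ndvi(nir, red, eps=1e-12):
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

# Illustrative values only: forward scattering darkens the red band (shadows),
# which raises NDVI compared with the backscattering view of the same canopy.
print(ndvi(0.45, 0.06))   # forward scattering
print(ndvi(0.47, 0.09))   # backscattering
```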
Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib
2008-10-01
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in a clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed a substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregular shapes of regions containing contrast medium was developed for wider applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in a clinical setting.
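The piecewise calibration curve mentioned above is commonly realized as a bilinear mapping from CT numbers to 511 keV attenuation coefficients; the break point and slopes in the sketch below are illustrative round numbers, not the scanner-specific calibration used in the study.

```python
# Sketch of a piecewise (bilinear) CT-number-to-mu(511 keV) conversion.
import numpy as np

MU_WATER_511 = 0.096  # cm^-1, approximate value for water at 511 keV

def hu_to_mu511(hu, hu_break=0.0, bone_slope=0.64):
    """Soft-tissue branch scales linearly with HU; the bone branch uses a
    reduced slope because bone attenuates relatively less at 511 keV.
    All constants here are assumptions for illustration."""
    hu = np.asarray(hu, float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)
    bone = MU_WATER_511 * (1.0 + bone_slope * hu / 1000.0)
    return np.where(hu <= hu_break, soft, bone)

print(hu_to_mu511([-1000, 0, 1000]))   # air, water, dense bone
```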
Lung partitioning for x-ray CAD applications
NASA Astrophysics Data System (ADS)
Annangi, Pavan; Raja, Anand
2011-03-01
Partitioning the interior of the lung into homogeneous regions is a crucial step in any computer-aided diagnosis application based on chest X-ray. The ribs, air pockets and clavicle occupy major space inside the lung as seen in the chest X-ray PA image, and segmenting the ribs and clavicle to partition the lung into homogeneous regions allows a CAD application to better classify abnormalities. In this paper we present two separate algorithms to segment the ribs and the clavicle bone in a completely automated way. The posterior ribs are segmented based on phase congruency features, and the clavicle is segmented using mean curvature features followed by a Radon transform. Both algorithms work on the premise that each of these anatomical structures presents inside the left and right lung within a specific, confined orientation range. The search space for both algorithms is limited to the region inside the lung, which is obtained by an automated lung segmentation algorithm previously developed in our group. Both algorithms were tested on 100 images of normal subjects and patients affected with pneumoconiosis.
Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter
2018-01-01
Introduction: Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are outsourced from clinical centers to third parties and industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. Material and methods: In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source segmentation algorithm GrowCut were assessed through comparison with the manually generated ground truth of the same anatomy, using 10 lower-jaw CT datasets from the clinical routine. Assessment parameters were segmentation time, volume, voxel number, Dice score and Hausdorff distance. Results: Overall, semi-automatic GrowCut segmentation times were about one minute. Mean Dice score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant at the 0.05 level, and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Discussion: Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative to image-based segmentation in clinical practice, e.g., for surgical treatment planning or visualization of postoperative results, and it offers several advantages. Owing to its open-source basis, the method can be further developed by other groups or specialists. Systematic comparisons with other segmentation approaches and with larger datasets are areas of future work. PMID:29746490
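For reference, the two agreement measures reported above can be computed from binary masks as follows. This is a generic sketch of the standard definitions, not the study's evaluation code; it assumes non-empty masks on the same voxel grid.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_score(a, b):
    """Dice similarity coefficient of two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_voxels(a, b):
    """Symmetric Hausdorff distance in voxels between two non-empty masks,
    computed via the Euclidean distance transform of each mask."""
    a, b = a.astype(bool), b.astype(bool)
    d_to_a = distance_transform_edt(~a)  # distance of each voxel to mask a
    d_to_b = distance_transform_edt(~b)  # distance of each voxel to mask b
    return max(d_to_a[b].max(), d_to_b[a].max())
```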
Changes to the COS Extraction Algorithm for Lifetime Position 3
NASA Astrophysics Data System (ADS)
Proffitt, Charles R.; Bostroem, K. Azalee; Ely, Justin; Foster, Deatrick; Hernandez, Svea; Hodge, Philip; Jedrzejewski, Robert I.; Lockwood, Sean A.; Massa, Derck; Peeples, Molly S.; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Roman-Duval, Julia; Sana, Hugues; Sahnow, David J.; Sonnentrucker, Paule; Taylor, Joanna M.
2015-09-01
The COS FUV Detector Lifetime Position 3 (LP3) has been placed only 2.5" below the original lifetime position (LP1). This is sufficiently close to gain-sagged regions at LP1 that a revised extraction algorithm is needed to ensure good spectral quality. We provide an overview of this new "TWOZONE" extraction algorithm, discuss its strengths and limitations, describe new output columns in the X1D files that show the boundaries of the new extraction regions, and provide some advice on how to manually tune the algorithm for specialized applications.
Embedded assessment algorithms within home-based cognitive computer game exercises for elders.
Jimison, Holly; Pavel, Misha
2006-01-01
With the recent consumer interest in computer-based activities designed to improve cognitive performance, there is a growing need for scientific assessment algorithms to validate the potential contributions of cognitive exercises. In this paper, we present a novel methodology for incorporating dynamic cognitive assessment algorithms within computer games designed to enhance cognitive performance. We describe how this approach works for a variety of computer applications and present cognitive monitoring results for one of the computer game exercises. The real-time cognitive assessments also provide a control signal for adapting the difficulty of the game exercises and providing tailored help for elders of varying abilities.
A New, More Physically Based Algorithm for Retrieving Aerosol Properties over Land from MODIS
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Kaufman, Yoram J.; Remer, Lorraine A.; Mattoo, Shana
2004-01-01
The MODerate resolution Imaging Spectroradiometer (MODIS) has been successfully retrieving aerosol properties since early 2000 from Terra and mid-2002 from Aqua. Over land, the retrieval algorithm makes use of three MODIS channels, at blue, red and infrared wavelengths. As part of the validation exercises, retrieved spectral aerosol optical thickness (AOT) has been compared via scatterplots against spectral AOT measured by the global Aerosol Robotic NETwork (AERONET). On one hand, global and long-term validation looks promising, with two-thirds (average plus and minus one standard deviation) of all points falling between published expected error bars. On the other hand, regression of these points shows a positive y-offset and a slope less than 1.0. For individual regions, such as along the U.S. East Coast, the offset and slope are even worse. Here, we introduce an overhaul of the algorithm for retrieving aerosol properties over land. Some well-known weaknesses of the current aerosol retrieval from MODIS include: a) rigid assumptions about the underlying surface reflectance, b) a limited set of aerosol models to choose from, c) simplified (scalar) radiative transfer (RT) calculations used to simulate satellite observations, and d) the assumption that aerosol is transparent in the infrared channel. The new algorithm attempts to address all four problems: a) it will include surface type information, instead of fixed ratios of the reflectance in the visible channels to the mid-IR reflectance; b) it will include updated aerosol optical properties reflecting the growing aerosol climatology retrieved from eight-plus years of AERONET operation; c) the effects of polarization will be included using vector RT calculations; d) most importantly, the new algorithm does not assume that aerosol is transparent in the infrared channel. It will be an inversion of reflectance observed in the three channels (blue, red, and infrared), rather than iterative single-channel retrievals. Thus, this new formulation of the MODIS aerosol retrieval over land includes more physically based surface, aerosol and radiative transfer treatments with fewer potentially erroneous assumptions.
Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard
2014-06-26
A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph-theoretic reduction of the wiring diagram of the network that preserves all information about steady states. The second part formulates the determination of all steady states of a Boolean network as the problem of finding all solutions to a system of polynomial equations over the finite field with two elements, GF(2). This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models, even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains an open problem. PMID:24965213
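The steady-state condition the paper encodes algebraically is simply x = f(x): a state is steady iff every coordinate satisfies f_i(x) + x_i = 0 over GF(2). The sketch below checks this condition by brute force on a toy three-node network (illustrative update rules, not from the paper); exhaustive enumeration of this kind is exactly the small-n baseline that the paper's reduction and computer-algebra formulation are designed to replace.

```python
from itertools import product

# Toy 3-node Boolean network; the update rules are illustrative only.
rules = [
    lambda x: x[1] and not x[2],   # f0
    lambda x: x[0] or x[2],        # f1
    lambda x: x[0] and x[1],       # f2
]

def steady_states(rules):
    """All fixed points x with f(x) = x, by exhaustive enumeration.
    Feasible only for small n, since the state space has 2^n elements."""
    n = len(rules)
    return [x for x in product((0, 1), repeat=n)
            if all(int(f(x)) == x[i] for i, f in enumerate(rules))]

print(steady_states(rules))  # -> [(0, 0, 0)] for these rules
```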
Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services
NASA Astrophysics Data System (ADS)
Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.
Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows, prompting the need for accurate and timely processing. The SSA mission continually evolves toward near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that tracks satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing conjunction data messages (CDMs) outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform is discussed, including the trials and tribulations involved, and information is shared on how and why certain cloud products were used as well as the integration techniques that were implemented. Key items presented are: 1. scientific algorithms and SpaceNav tools integrated into a scalable architecture: a) maneuver planning, b) parallel processing, c) Monte Carlo simulations, d) optimization algorithms, e) software application development/integration into the Google Cloud Platform; 2. Compute Engine processing: a) Application Engine automated processing, b) performance testing and performance scalability, c) Cloud MySQL databases and database scalability, d) cloud data storage, e) redundancy and availability.
Skull removal in MR images using a modified artificial bee colony optimization algorithm.
Taherdangkoo, Mohammad
2014-01-01
Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm using a different strategy for initializing the coordinates of the scout bees and their direction of search. Moreover, we impose an additional constraint on the ABC algorithm to avoid the creation of discontinuous regions. We found that our algorithm successfully removed the entire bony skull from a sample of de-identified brain MR images acquired from scanners of different models. Comparison with previously introduced, well-known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) demonstrates the superior results and computational performance of our algorithm, suggesting its potential for clinical applications.
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W
2015-01-01
We introduce here MATtrack, an open-source MATLAB-based computational platform developed to process multi-TIFF files produced by a photo-conversion time-lapse protocol for live-cell fluorescence microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from multi-TIFF files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis, enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions, and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of fluorescent signal intensity in multiple regions of interest over time. The latter encapsulates a region-growing method to automatically delineate the contours of regions of interest selected by the user, and performs background and regional average fluorescence tracking and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools, including a migration map that provides an overview of protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open-source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live-cell fluorescence microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and compatible with a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569
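The contour-delineation step mentioned above is a standard region-growing scheme. MATtrack itself is MATLAB code; the following Python sketch shows the generic idea under an illustrative homogeneity criterion (intensity within a fixed tolerance of the seed), which is an assumption rather than MATtrack's actual rule.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=0.1):
    """Grow an ROI from a user-selected `seed` pixel over 4-connected
    neighbors whose intensity stays within `tol` of the seed intensity."""
    h, w = img.shape
    ref = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask  # the mask boundary is the delineated ROI contour
```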
A practical salient region feature based 3D multi-modality registration method for medical images
NASA Astrophysics Data System (ADS)
Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang
2006-03-01
We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt the scale, translation and rotation invariance properties of those intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features from each image, robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing-based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search-space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that were acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert using an approach that measures the distance between a set of selected corresponding points comprising both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artifacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
Relationships of a growing magnetic flux region to flares
NASA Technical Reports Server (NTRS)
Martin, S. F.; Bentley, R. D.; Schadee, A.; Antalova, A.; Kucera, A.; Dezso, L.; Gesztelyi, L.; Harvey, K. L.; Jones, H.; Livi, S. H. B.
1984-01-01
The evolution of flare sites at the boundaries of major new and growing magnetic flux regions within complexes of active regions has been analyzed using H-alpha images. A spectrum of possible relationships of growing flux regions to flares is described. An 'intimate' interaction between old and new flux and flare sites occurs at the boundaries of their regions. Forced or 'intimidated' interaction involves new flux pushing older, lower flux density fields toward a neighboring old polarity inversion line, followed by the occurrence of a flare. In 'influential' interaction, magnetic lines of force over an old polarity inversion line reconnect to new emerging flux, and a flare occurs when the magnetic field overlying the filament becomes too weak to prevent its eruption. 'Inconsequential' interaction occurs when a new flux region is too small or has the wrong orientation for creating flare conditions. 'Incidental' interaction involves a flare occurring without any significant relationship to new flux regions.
Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling
NASA Technical Reports Server (NTRS)
Brown, Matthew; Johnston, Mark D.
2013-01-01
Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end-user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and offer some initial suggestions for parallelizing the GDE algorithm. Directions for future work are outlined.
Compression of electromyographic signals using image compression techniques.
Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira
2008-01-01
Despite the growing interest in the transmission and storage of electromyographic (EMG) signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals from both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75% to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75% to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those of other algorithms based on the wavelet transform.
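PRD here is presumably the usual percentage root-mean-square difference between the original and reconstructed signals; a generic sketch of that measure (not the authors' code):

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage RMS difference between a signal and its reconstruction,
    a common distortion measure in biomedical signal compression."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))
```

Lower PRD means less distortion, so a codec is typically scored by PRD at a fixed compression factor, as in the comparison above.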
False-nearest-neighbors algorithm and noise-corrupted time series
NASA Astrophysics Data System (ADS)
Rhodes, Carl; Morari, Manfred
1997-05-01
The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job of predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows, the problem of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented.
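A minimal sketch of the FNN test under standard choices (unit delay by default, Euclidean metric, a single ratio threshold); the exact criterion and threshold used in the original studies differ in detail.

```python
import numpy as np

def false_nn_fraction(x, dim, delay=1, r_thresh=15.0):
    """Fraction of false nearest neighbors when embedding the scalar
    series `x` in `dim` dimensions (simplified standard criterion)."""
    n = len(x) - dim * delay
    # Delay-embedded vectors of length `dim`.
    emb = np.array([x[i:i + dim * delay:delay] for i in range(n)])
    false = 0
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))                 # nearest neighbor in dim
        extra = abs(x[i + dim * delay] - x[j + dim * delay])
        if d[j] > 0 and extra / d[j] > r_thresh:
            false += 1                        # the neighbor was "false"
    return false / n
```

The embedding dimension is taken as the smallest `dim` whose FNN fraction drops near zero; noise inflates the `extra` distance, which is how long, noisy series can fool the test.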
NASA Astrophysics Data System (ADS)
Son, Young-Sun; Kim, Hyun-cheol
2018-05-01
Chlorophyll (Chl) concentration is one of the key indicators identifying changes in the Arctic marine ecosystem. However, current Chl algorithms are not accurate in the Arctic Ocean because its bio-optical properties differ from those of lower-latitude oceans. In this study, we evaluated current Chl algorithms and analyzed the causes of their errors in the western coastal waters of Svalbard, which are known to be sensitive to climate change. The NASA standard algorithms were shown to overestimate the Chl concentration in the region. This was due to high non-algal particle (NAP) absorption and colored dissolved organic matter (CDOM) variability at blue wavelengths. In addition, at lower Chl concentrations (0.1-0.3 mg m-3), chlorophyll-specific absorption coefficients were ∼2.3 times higher than those of other Arctic oceans, another reason for the overestimation of Chl concentration. A regionally tuned, OC4-based Svalbard Chl (SC4) algorithm for retrieving more accurate Chl estimates reduced the mean absolute percentage difference (APD) error from 215% to 49%, the mean relative percentage difference (RPD) error from 212% to 16%, and the normalized root mean square (RMS) error from 211% to 68%. This region has abundant suspended matter due to the melting of tidal glaciers, so we also evaluated the performance of total suspended matter (TSM) algorithms. Previously published TSM algorithms generally overestimated the TSM concentration in this region. The Svalbard TSM single-band algorithm for the low TSM range (ST-SB-L) decreased the APD and RPD errors by 52% and 14%, respectively, but the RMS error remained high (105%).
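Regional tuning of OC4 amounts to refitting the coefficients of a fourth-order polynomial in the maximum band ratio. The sketch below uses what are believed to be the published global OC4v4 coefficients as placeholders; the SC4 coefficients are not given in the abstract.

```python
import numpy as np

# Global OC4v4 coefficients (O'Reilly et al., assumed here); an SC4-style
# regional tuning would refit these against in situ Svalbard data.
A = (0.366, -3.067, 1.930, 0.649, -1.532)

def oc4_chl(rrs443, rrs490, rrs510, rrs555, a=A):
    """Maximum-band-ratio chlorophyll (mg m^-3) from remote-sensing
    reflectances, using the OC4 fourth-order polynomial form."""
    r = np.log10(np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555)
    return 10.0 ** (a[0] + a[1]*r + a[2]*r**2 + a[3]*r**3 + a[4]*r**4)
```

High NAP and CDOM absorption at the blue bands inflates the numerator of the band ratio in waters like these, which is consistent with the overestimation reported above.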
Temperature, Sowing and Harvest Dates, and Yield of Maize in the Southwestern US
NASA Astrophysics Data System (ADS)
Kafatos, M.; Stack, D.; Myoung, B.; Kim, S. H.; Kim, J.
2014-12-01
Since the sowing date of maize is sensitive to climate variability and change, it is of practical importance to examine how sowing dates affect maize yields in the various temperature regimes of the southwestern US. A 21-year (1991-2011) simulation of maize yield using the Agricultural Production Systems sIMulator (APSIM) with observed meteorological forcing shows that earlier sowing dates are favorable for higher yields, primarily by increasing the length of the growing season in cold mountainous regions. In these regions, warmer conditions in the sowing period tend to advance the sowing date and thereby enhance yield. Over low-elevation warm regions, yields are less correlated with sowing dates and the length of the growing season, perhaps because growing-season temperatures are high enough for fast growth. Instead, in the warm regions, maize yields are sensitive to temperature variations during the late growing season due to the adverse effects of extreme high-temperature events on maize development.
Blinova, Ilona; Chmielewski, Frank-Michael
2015-06-01
Anomalies in the timing of the thermal growing season have become obvious in the NE part of Fennoscandia since 2000. They are in accordance with climatic changes reported for Europe and Fennoscandia. The actual length of the growing season reached 120 days on average, onset on 30 May and ending on 27 September (1981-2010). Shifts in the timing of the growing season and its mean prolongation by 18.5 days/62a are demonstrated for Murmansk Region (1951-2012). In this period, the onset of the growing season advanced by 7.1 days/62a, while the end was extended by 11.4 days/62a. The delay in the end of the growing season is similar to the entire Fennoscandian pattern but it has not been detected in the rest of Europe. The regional pattern of climatic regimes in Murmansk Region remained stable in comparison with earlier climatic maps (1971). However, the actual shifts in the timing of the growing season were more pronounced in colder (oceanic and mountainous) parts. Recent climatic trends could influence the retreat of the tundra zone and changes in the forest line. Losses of tundra biodiversity and enrichment of the northern taiga by southern species could be expected from present climatic trends.
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple-degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
NASA Technical Reports Server (NTRS)
Doxley, Charles A.
2016-01-01
In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.
An Analysis of Navigation Algorithms for Smartphones Using J2ME
NASA Astrophysics Data System (ADS)
Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.
Embedded systems are considered one of the most promising areas for future innovation. Two embedded fields that will most certainly take a primary role in future innovations are mobile robotics and mobile computing. Mobile robots and smartphones are growing in number and functionality, becoming a presence in our daily lives. In this paper, we study the current feasibility of executing navigation algorithms on a smartphone. As a test case, we use a smartphone to control an autonomous mobile robot. We tested three navigation problems: mapping, localization and path planning. For each of these problems, an algorithm was chosen, developed in J2ME, and tested in the field. Results show the current capacity of mobile Java for executing computationally demanding algorithms and reveal the real possibility of using smartphones for autonomous navigation.
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard camera sensors. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens up new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
NASA Astrophysics Data System (ADS)
Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.
2016-03-01
The purpose of this study is to compare the build-up region doses on the surface of a breast Rando phantom covered with bolus, the doses inside the breast phantom, and the doses in the lung, a heterogeneous region, as computed by two algorithms. The anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential-field technique with a 6 MV photon beam at a total dose of 200 cGy to the breast Rando phantom with bolus (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and in the breast phantom were closely matched between the two algorithms, with differences of less than 2%. However, AAA overestimated the dose in the lung (L2) by 13.78% and 6.06% at 5 mm and 10 mm bolus thickness, respectively, compared with the CCC algorithm. The TLD measurements showed underestimates in the build-up region and in the breast phantom, but the doses in the lung (L2) were overestimated when compared with the doses in the two plans at both bolus thicknesses.
Fabric pilling measurement using three-dimensional image
NASA Astrophysics Data System (ADS)
Ouyang, Wenbin; Wang, Rongwu; Xu, Bugao
2013-10-01
We introduce a stereovision system and three-dimensional (3-D) image analysis algorithms for fabric pilling measurement. Based on the depth information available in the 3-D image, the pilling detection process proceeds from seed searching at local depth maxima to region growing around the selected seeds using both depth and distance criteria. After pilling detection, the density, height, and area of the individual pills in the image can be extracted to describe the pilling appearance. According to a multivariate regression analysis of the 3-D images of 30 cotton fabrics treated in random-tumble and home-laundering machines, the pilling grade is highly correlated with the pilling density (R=0.923) but does not change consistently with pilling height and area. The pilling densities measured from the 3-D images also correlate well with those counted manually from the samples (R=0.985).
Automated feature extraction in color retinal images by a model based approach.
Li, Huiqi; Chutatape, Opas
2004-02-01
Color retinal photography is an important tool for detecting the evidence of various eye diseases. Novel methods to extract the main features in color retinal images are developed in this paper. Principal component analysis is employed to locate the optic disk; a modified active shape model is proposed for the shape detection of the optic disk; a fundus coordinate system is established to provide a better description of the features in retinal images; and an approach to detect exudates by combined region growing and edge detection is proposed. The success rates of disk localization, disk boundary detection, and fovea localization are 99%, 94%, and 100%, respectively. The sensitivity and specificity of exudate detection are 100% and 71%, correspondingly. The success of the proposed algorithms can be attributed to the utilization of model-based methods. The detection and analysis could be applied to automatic mass screening and diagnosis of retinal diseases.
Airborne gravimetry, altimetry, and GPS navigation errors
NASA Technical Reports Server (NTRS)
Colombo, Oscar L.
1992-01-01
Proper interpretation of airborne gravimetry and altimetry requires good knowledge of the aircraft trajectory. Recent advances in precise navigation with differential GPS have made it possible to measure gravity from the air with accuracies of a few milligals, and to obtain altimeter profiles of terrain or sea surface correct to one decimeter. These developments are opening otherwise inaccessible regions to detailed geophysical mapping. Navigation with GPS presents some problems that grow worse with increasing distance from a fixed receiver: the effects of errors in the tropospheric refraction correction, the GPS ephemerides, and the coordinates of the fixed receivers. Ionospheric refraction and orbit error complicate ambiguity resolution. Optimal navigation should treat all error sources as unknowns, together with the instantaneous vehicle position. To do so, fast and reliable numerical techniques are needed: efficient and stable Kalman filter-smoother algorithms, together with data compression and, sometimes, the use of simplified dynamics.
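As a toy illustration of the filtering half of the filter-smoother machinery called for above, here is a minimal one-dimensional constant-velocity Kalman filter over noisy position fixes; the state model and noise levels are illustrative assumptions, far simpler than a full GPS navigation filter.

```python
import numpy as np

def kalman_1d(zs, dt=1.0, q=1e-3, r=4.0):
    """Filter noisy 1D position fixes `zs` with a constant-velocity model.
    Returns the filtered [position, velocity] state at each epoch.
    Process noise q and measurement noise r are illustrative values."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = np.array([zs[0], 0.0])
    P = np.eye(2) * 10.0
    out = []
    for z in zs[1:]:
        x = F @ x                           # predict state
        P = F @ P @ F.T + Q                 # predict covariance
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + r                 # innovation covariance
        K = (P @ H.T) / S                   # Kalman gain
        x = x + (K * y).ravel()             # update state
        P = (np.eye(2) - K @ H) @ P         # update covariance
        out.append(x.copy())
    return np.array(out)
```

A smoother would run a backward pass over the stored states and covariances; treating the error sources listed above as additional state unknowns enlarges the same machinery rather than changing it.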
Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna
2018-01-01
Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on a seeded region growing algorithm was developed and applied to MRI data from 199 typically developing fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground-truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method for identifying developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001), with mean volume and volume overlap differences of 4.77% and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable to retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
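The growth-chart construction reduces to a quadratic fit of brain volume against gestational age plus percentile bands. A generic numpy sketch follows; deriving the 3rd/97th percentiles from residual quantiles is an assumed simplification, not necessarily the paper's charting method.

```python
import numpy as np

def growth_chart(ga_weeks, volumes_cm3, grid=None):
    """Fit volume = a*GA^2 + b*GA + c and derive approximate 3rd/97th
    percentile bands from the residual quantiles (illustrative method)."""
    coeffs = np.polyfit(ga_weeks, volumes_cm3, deg=2)
    residuals = np.asarray(volumes_cm3) - np.polyval(coeffs, ga_weeks)
    lo, hi = np.percentile(residuals, [3, 97])
    g = np.asarray(grid if grid is not None else np.sort(ga_weeks))
    center = np.polyval(coeffs, g)
    return g, center + lo, center, center + hi  # grid, p3, median, p97

# A new case falling at or below the p3 curve would be flagged, as the
# nine IUGR fetuses were in the study above.
```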
Two methods of Haustral fold detection from computed tomographic virtual colonoscopy images
NASA Astrophysics Data System (ADS)
Chowdhury, Ananda S.; Tan, Sovira; Yao, Jianhua; Linguraru, Marius G.; Summers, Ronald M.
2009-02-01
Virtual colonoscopy (VC) has gained popularity as a new colon diagnostic method over the last decade. VC is a less invasive alternative to the usually practiced optical colonoscopy for screening for colorectal polyps and cancer, the second major cause of cancer-related deaths in industrialized nations. Haustral (colonic) folds serve as important landmarks for virtual endoscopic navigation in the existing computer-aided-diagnosis (CAD) system. In this paper, we propose and compare two different methods of haustral fold detection from volumetric computed tomographic virtual colonoscopy images. The colon lumen is segmented from the input using modified region growing and fuzzy connectedness. The first method for fold detection uses a level set that evolves on a mesh representation of the colon surface; the colon surface is obtained from the segmented colon lumen using the Marching Cubes algorithm. The second method for fold detection, based on a combination of heat diffusion and the fuzzy c-means algorithm, is employed on the segmented colon volume; folds obtained on the colon volume using this method are then transferred to the corresponding colon surface. After experimentation with different datasets, the results are found to be promising. They also demonstrate that the first method has a tendency toward slight under-segmentation, while the second method tends to slightly over-segment the folds.
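The fuzzy c-means component of the second method can be sketched in its textbook form (soft cluster memberships updated against cluster centers); the coupling with heat diffusion used in the paper is not shown.

```python
import numpy as np

def fuzzy_cmeans(points, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: soft-assign each point to c clusters.
    `points` is (n, d); returns memberships (n, c) and centers (c, d)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), c))
    u /= u.sum(axis=1, keepdims=True)      # valid fuzzy memberships
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        # Distance of every point to every center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # Standard update: u_ik proportional to d_ik^(-2/(m-1)), normalized.
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < 1e-5:
            break
        u = u_new
    return u, centers
```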
A semi-automated algorithm for hypothalamus volumetry in 3 Tesla magnetic resonance images.
Wolff, Julia; Schindler, Stephanie; Lucas, Christian; Binninger, Anne-Sophie; Weinrich, Luise; Schreiber, Jan; Hegerl, Ulrich; Möller, Harald E; Leitzke, Marco; Geyer, Stefan; Schönknecht, Peter
2018-07-30
The hypothalamus, a small diencephalic gray matter structure, is part of the limbic system. Volumetric changes of this structure occur in psychiatric diseases; therefore, there is increasing interest in precise volumetry. Based on our detailed volumetry algorithm for 7 Tesla magnetic resonance imaging (MRI), we developed a method for 3 Tesla MRI, adopting its anatomical landmarks and triplanar-view workflow. We overlaid T1-weighted MR images with gray matter tissue probability maps to combine anatomical information with tissue class segmentation. We then outlined regions of interest (ROIs) that covered potential hypothalamus voxels. Within these ROIs, a seed-growing technique helped define the hypothalamic volume using gray matter probabilities from the tissue probability maps. This yielded a semi-automated method with short processing times of 20-40 min per hypothalamus. In the MRIs of ten subjects, reliabilities were determined as intraclass correlations (ICC) and volume overlaps in percent. Three raters achieved very good intra-rater reliabilities (ICC 0.82-0.97) and good inter-rater reliabilities (ICC 0.78 and 0.82). Overlaps of intra- and inter-rater runs were very good (≥ 89.7%). We present a fast, semi-automated method for in vivo hypothalamus volumetry in 3 Tesla MRI. Copyright © 2018 Elsevier B.V. All rights reserved.
Storelli, L; Pagani, E; Rocca, M A; Horsfield, M A; Gallo, A; Bisecco, A; Battaglini, M; De Stefano, N; Vrenken, H; Thomas, D L; Mancini, L; Ropele, S; Enzinger, C; Preziosa, P; Filippi, M
2016-07-21
Automatic segmentation of MS lesions could reduce the time required for image processing, together with inter- and intra-operator variability, for research and clinical trials. A multicenter validation of a proposed semiautomatic method for hyperintense MS lesion segmentation on dual-echo MR imaging is presented. The classification technique used is based on a region-growing approach starting from manual lesion identification by an expert observer, with a final segmentation-refinement step. The method was validated in a cohort of 52 patients with relapsing-remitting MS, with dual-echo images acquired at 6 different European centers. We found a mathematical expression that made optimization of the method independent of the need for a training dataset. The automatic segmentation was in good agreement with the manual segmentation (Dice similarity coefficient = 0.62 and root mean square error = 2 mL). Assessment of the segmentation errors showed no significant differences in algorithm performance between the different MR scanner manufacturers (P > .05). The method proved to be robust, and no center-specific training of the algorithm was required, offering the possibility of application in a clinical setting. Adoption of the method should lead to improved reliability and less operator time required for image analysis in research and clinical trials in MS. © 2016 American Society of Neuroradiology.
Wound size measurement of lower extremity ulcers using segmentation algorithms
NASA Astrophysics Data System (ADS)
Dadkhah, Arash; Pang, Xing; Solis, Elizabeth; Fang, Ruogu; Godavarty, Anuradha
2016-03-01
Lower extremity ulcers are one of the most common complications that not only affect many people around the world but also have a huge economic impact, since a large amount of resources is spent on treatment and prevention of these diseases. Clinical studies have shown that a reduction in wound size of 40% within 4 weeks represents acceptable progress in the healing process. Quantification of wound size plays a crucial role in assessing the extent of healing and determining the treatment process. To date, wound healing is visually inspected and the wound size is measured from surface images. The extent of wound healing internally may vary from that at the surface. A near-infrared (NIR) optical imaging approach has been developed for non-contact imaging of wounds internally and for differentiating healing from non-healing wounds. Herein, quantitative wound size measurements from NIR and white-light images are estimated using graph-cut and region-growing image segmentation algorithms. The extent of wound healing from NIR imaging of lower extremity ulcers in diabetic subjects is quantified and compared across NIR and white-light images. NIR imaging and wound size measurements can play a significant role in potentially predicting the extent of internal healing, thus allowing better treatment plans when implemented for periodic imaging in the future.
Predicting Major Solar Eruptions
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-05-01
Coronal mass ejections (CMEs) and solar flares are two examples of major explosions from the surface of the Sun, but they're not the same thing, and they don't have to happen at the same time. A recent study examines whether we can predict which solar flares will be closely followed by larger-scale CMEs. [Figure: image of a solar flare from May 2013, as captured by NASA's Solar Dynamics Observatory. NASA/SDO]

Flares as a Precursor? A solar flare is a localized burst of energy and X-rays, whereas a CME is an enormous cloud of magnetic flux and plasma released from the Sun. We know that some magnetic activity on the surface of the Sun triggers both a flare and a CME, whereas other activity triggers only a confined flare with no CME. But what makes the difference? Understanding this can help us learn about the underlying physical drivers of flares and CMEs. It might also help us better predict when a CME might occur; CMEs can pose a risk to astronauts, disrupt radio transmissions, and cause damage to satellites. In a recent study, Monica Bobra and Stathis Ilonidis (Stanford University) attempt to improve our ability to make these predictions by using a machine-learning algorithm.

Classification by Computer. Bobra and Ilonidis used magnetic-field data from an instrument on the Solar Dynamics Observatory to build a catalog of solar flares, 56 of which were accompanied by a CME and 364 of which were not. The catalog includes information about 18 different features associated with the photospheric magnetic field of each flaring active region (for example, the mean gradient of the horizontal magnetic field). [Figure: using a combination of 6 or more features results in much better predictive success, measured by the True Skill Statistic (higher positive value = better prediction), for whether a flare will be accompanied by a CME. Bobra & Ilonidis 2016] The authors apply a machine-learning algorithm known as a binary classifier to this catalog. This algorithm tries to predict, given a set of features, whether an active region that produces a flare will also produce a CME. Bobra and Ilonidis then use a feature-selection algorithm to try to understand which features distinguish between flaring regions that don't produce a CME and those that do.

Predictors of CMEs. The authors reach several interesting conclusions. Under the right conditions, their algorithm is able to predict whether an active region with a given set of features will produce a CME as well as a flare, with a fairly high rate of success. None of the 18 features they tested are good predictors in isolation: it's necessary to look at a combination of at least 6 features to have success predicting whether a flare will be accompanied by a CME. The features that are the best predictors are all intensive features, ones that stay the same independent of the active region's size; extensive features, ones that change as the active region grows or shrinks, are less successful predictors. Only the magnetic field properties of the photosphere were considered, so a logical next step is to extend this study to consider properties of the solar corona above active regions as well. In the meantime, these are interesting first results that may well help us better predict these major solar eruptions.

Bonus: Check out this video for a great description from NASA of the difference between solar flares and CMEs (as well as some awesome observations of both).

Citation: M. G. Bobra and S. Ilonidis 2016 ApJ 821 127. doi:10.3847/0004-637X/821/2/127
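The True Skill Statistic used to score the classifier is the hit rate minus the false-alarm rate, computed from the confusion matrix; a small sketch with hypothetical counts (not the paper's numbers):

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = hit rate - false alarm rate; ranges from -1 to +1,
    with +1 a perfect forecast and 0 no skill."""
    return tp / (tp + fn) - fp / (fp + tn)

# Hypothetical example: 40 of the 56 CME events caught (16 missed) and
# 30 false alarms out of 364 non-events.
print(true_skill_statistic(tp=40, fn=16, fp=30, tn=334))  # ~0.632
```

Unlike plain accuracy, TSS is insensitive to the heavy class imbalance here (56 CME events versus 364 non-events), which is presumably why the authors adopt it.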
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions. For the major-interference region, some typical interferential curves are selected to predict the other curves, and these typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit rates for lossy compression.
A hierarchical transition state search algorithm
NASA Astrophysics Data System (ADS)
del Campo, Jorge M.; Köster, Andreas M.
2008-07-01
A hierarchical transition state search algorithm is developed and its implementation in the density functional theory program deMon2k is described. This search algorithm combines the double-ended saddle interpolation method with local uphill trust-region optimization. A new formalism for the incorporation of the distance constraint in the saddle interpolation method is derived. The similarities between the constrained optimizations in the local trust-region method and the saddle interpolation are highlighted. The saddle interpolation and local uphill trust-region optimizations are validated on a test set of 28 representative reactions. The hierarchical transition state search algorithm is applied to an intramolecular Diels-Alder reaction with several internal rotors, which makes automatic transition state search rather challenging. The obtained reaction mechanism is discussed in the context of the experimentally observed product distribution.
Muñoz, Mario A; Smith-Miles, Kate A
2017-01-01
This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points at which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.
NASA Technical Reports Server (NTRS)
Esaias, Wayne E.; Abbott, Mark; Carder, Kendall; Campbell, Janet; Clark, Dennis; Evans, Robert; Brown, Otis; Kearns, Ed; Kilpatrick, Kay; Balch, W.
2003-01-01
Simplistic models relating global satellite ocean color, temperature, and light to ocean net primary production (ONPP) are sensitive to the accuracy and limitations of the satellite estimate of chlorophyll and other input fields, as well as to the primary productivity model. The standard MODIS ONPP product uses the new semi-analytic chlorophyll algorithm as the input for two ONPP indexes. The three primary MODIS chlorophyll estimates, as well as the SeaWiFS chlorophyll product, were used to assess global and regional performance in estimating ONPP for the full mission, concentrating on 2001. The two standard ONPP algorithms were examined at 8-day and 39-kilometer resolution to quantify the chlorophyll-algorithm dependency of ONPP. Ancillary data (MLD from FNMOC, MODIS SSTD1, and PAR from the GSFC DAO) were identical. The standard MODIS ONPP estimates for annual production in 2001 were 59 and 58 Gt C for the two ONPP algorithms. Differences in ONPP using alternate chlorophylls were on the order of 10% for global annual ONPP, but ranged to 100% regionally. On all scales, the differences in ONPP were smaller between MODIS and SeaWiFS than between ONPP models, or among chlorophyll algorithms within MODIS. The largest regional ONPP differences were found in the Southern Ocean (SO). In the SO, application of the semi-analytic chlorophyll resulted not only in a magnitude difference in ONPP (2x), but also in a temporal shift in the time of maximum production compared to empirical algorithms when summed over standard oceanic areas. The resulting increase in global ONPP (6-7 Gt) is supported by better performance of the semi-analytic chlorophyll in the SO and other high-chlorophyll regions. The differences are significant for understanding regional differences and the dynamics of ocean carbon transformations.
Li, Hong; Liu, Mingyong; Zhang, Feihu
2017-01-01
This paper presents a multi-objective evolutionary algorithm for bio-inspired geomagnetic navigation of an Autonomous Underwater Vehicle (AUV). Inspired by biological navigation behavior, the solution operates without a priori information, simply by magnetotaxis searching. However, geomagnetic anomalies, which disrupt the distribution of the geomagnetic field, have a significant influence on the geomagnetic navigation system. An extreme-value region may easily appear in abnormal regions, which can leave the AUV lost during the navigation phase. This paper proposes an improved bio-inspired algorithm with behavior constraints to make the AUV escape from the abnormal region. First, the navigation problem is formulated as an optimization problem. Second, an environmental monitoring operator is introduced to determine whether the algorithm has fallen into a geomagnetic anomaly region. Then, a behavior constraint operator is employed to get out of the abnormal region. Finally, the termination condition is triggered. Compared to the state of the art, the proposed approach effectively overcomes the disturbance of the geomagnetic anomaly. The simulation results demonstrate the reliability and feasibility of the proposed approach in complex environments. PMID:28747884
Pyramid algorithms as models of human cognition
NASA Astrophysics Data System (ADS)
Pizlo, Zygmunt; Li, Zheng
2003-06-01
There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of those mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.
Options for Parallelizing a Planning and Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.
2011-01-01
Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.
The status of timber resources in the North Central United States
Neal H. Sullivan; Stephen R. Shifley
2003-01-01
Between 1953 and 1997 the volume of standing timber in the region (growing stock) more than doubled from 37 to 83 billion cubic feet. Forests in the North Central Region grow 2.3 billion cubic feet of new wood on growing-stock trees each year. Annual removals are about half that amount. The pattern is the same in each of the seven included states (Minnesota, Wisconsin...
Agriculturally Relevant Climate Extremes and Their Trends in the World's Major Growing Regions
NASA Astrophysics Data System (ADS)
Zhu, Xiao; Troy, Tara J.
2018-04-01
Climate extremes can negatively impact crop production, and climate change is expected to affect the frequency and severity of extremes. Using a combination of in situ station measurements (Global Historical Climatology Network's Daily data set) and multiple other gridded data products, a derived 1° data set of growing season climate indices and extremes is compiled over the major growing regions for maize, wheat, soybean, and rice for 1951-2006. This data set contains growing season climate indices that are agriculturally relevant, such as the number of hot days, duration of dry spells, and rainfall intensity. Before 1980, temperature-related indices had few trends; after 1980, statistically significant warming trends exist for each crop in the majority of growing regions. In particular, crops have increasingly been exposed to extreme hot temperatures, above which yields have been shown to decline. Rainfall trends are less consistent compared to temperature, with some regions receiving more rainfall and others less. Anomalous temperature and precipitation conditions are shown to often occur concurrently, with dry growing seasons more likely to be hotter, have larger drought indices, and have larger vapor pressure deficits. This leads to the confluence of a variety of climate conditions that negatively impact crop yields. These results show a consistent increase in global agricultural exposure to negative climate conditions since 1980.
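For readers implementing similar indices, a minimal sketch of three of the quantities named above, computed from daily series with assumed thresholds (30 degC hot day, 1 mm wet day; values chosen for illustration only), might look like:

    import numpy as np

    def season_indices(tmax, precip, hot_thresh=30.0, wet_day=1.0):
        hot_days = int(np.sum(tmax > hot_thresh))      # days above threshold
        dry = precip < wet_day                         # dry-day mask
        longest, run = 0, 0                            # longest dry spell
        for d in dry:
            run = run + 1 if d else 0
            longest = max(longest, run)
        wet = precip[precip >= wet_day]
        sdii = float(wet.mean()) if wet.size else 0.0  # rainfall intensity, mm/wet day
        return {"hot_days": hot_days, "max_dry_spell": longest, "sdii": sdii}

    rng = np.random.default_rng(1)                     # synthetic 120-day season
    print(season_indices(rng.normal(28, 4, 120), rng.gamma(0.4, 6.0, 120)))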
Luo, Junhai; Fu, Liang
2017-06-09
With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and retain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.
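The two offline tools named above (KPCA and Affinity Propagation) have standard scikit-learn implementations; a toy sketch on synthetic RSS fingerprints, with hypothetical dimensions and default kernels rather than the paper's settings, could look like:

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.cluster import AffinityPropagation

    # 200 synthetic fingerprints of RSS (dBm) from 12 hypothetical APs.
    rss = np.random.default_rng(2).uniform(-90, -30, size=(200, 12))

    features = KernelPCA(n_components=4, kernel="rbf").fit_transform(rss)
    clusters = AffinityPropagation(random_state=0).fit_predict(features)
    print(len(set(clusters)), "clusters over", rss.shape[0], "fingerprints")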
Analysis of MAIAC Dust Aerosol Retrievals from MODIS Over North Africa
NASA Technical Reports Server (NTRS)
Lyapustin, A.; Wang, Y.; Hsu, C.; Torres, O.; Leptoukh, G.; Kalashnikova, O.; Korkin, S.
2011-01-01
An initial comparison of aerosol optical thickness (AOT) over North Africa for the year 2007 was performed between the Deep Blue (DB) and Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithms, complemented with MISR and OMI data. The new MAIAC algorithm has a better sensitivity to small dust storms than the DB algorithm, but it also has biases in the brightest desert regions, indicating the need for improvement. The quarterly averaged AOT values in the Bodele depression and the western downwind transport region show good agreement among MAIAC, MISR and OMI data, while the DB algorithm shows a somewhat different seasonality.
FluBreaks: early epidemic detection from Google flu trends.
Pervaiz, Fahad; Pervaiz, Mansoor; Abdur Rehman, Nabeel; Saif, Umar
2012-10-04
The Google Flu Trends service was launched in 2008 to track changes in the volume of online search queries related to flu-like symptoms. Over the last few years, the trend data produced by this service has shown a consistent relationship with the actual number of flu reports collected by the US Centers for Disease Control and Prevention (CDC), often identifying increases in flu cases weeks in advance of CDC records. However, contrary to popular belief, Google Flu Trends is not an early epidemic detection system. Instead, it is designed as a baseline indicator of the trend, or changes, in the number of disease cases. To evaluate whether these trends can be used as a basis for an early warning system for epidemics. We present the first detailed algorithmic analysis of how Google Flu Trends can be used as a basis for building a fully automated system for early warning of epidemics in advance of methods used by the CDC. Based on our work, we present a novel early epidemic detection system, called FluBreaks (dritte.org/flubreaks), based on Google Flu Trends data. We compared the accuracy and practicality of three types of algorithms: normal distribution algorithms, Poisson distribution algorithms, and negative binomial distribution algorithms. We explored the relative merits of these methods, and related our findings to changes in Internet penetration and population size for the regions in Google Flu Trends providing data. Across our performance metrics of percentage true-positives (RTP), percentage false-positives (RFP), percentage overlap (OT), and percentage early alarms (EA), Poisson- and negative binomial-based algorithms performed better in all except RFP. Poisson-based algorithms had average values of 99%, 28%, 71%, and 76% for RTP, RFP, OT, and EA, respectively, whereas negative binomial-based algorithms had average values of 97.8%, 17.8%, 60%, and 55% for RTP, RFP, OT, and EA, respectively. Moreover, the EA was also affected by the region's population size. Regions with larger populations (regions 4 and 6) had higher values of EA than region 10 (which had the smallest population) for negative binomial- and Poisson-based algorithms. The difference was 12.5% and 13.5% on average in negative binomial- and Poisson-based algorithms, respectively. We present the first detailed comparative analysis of popular early epidemic detection algorithms on Google Flu Trends data. We note that realizing this opportunity requires moving beyond the cumulative sum and historical limits method-based normal distribution approaches, traditionally employed by the CDC, to negative binomial- and Poisson-based algorithms to deal with potentially noisy search query data from regions with varying population and Internet penetrations. Based on our work, we have developed FluBreaks, an early warning system for flu epidemics using Google Flu Trends.
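A minimal example of the Poisson-based family of detectors compared in the paper (not FluBreaks itself; the trailing baseline window and the 99th-percentile cutoff are assumptions) is:

    import numpy as np
    from scipy.stats import poisson

    def poisson_alarms(counts, baseline=8, q=0.99):
        alarms = []
        for t in range(baseline, len(counts)):
            lam = np.mean(counts[t - baseline:t])   # baseline mean rate
            if counts[t] > poisson.ppf(q, lam):     # exceeds upper quantile
                alarms.append(t)
        return alarms

    series = [20, 22, 19, 21, 23, 20, 22, 21, 24, 60, 85, 90]
    print(poisson_alarms(series))   # the surge weeks (indices 9-11) are flagged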
A hybrid algorithm for coupling partial differential equation and compartment-based dynamics.
Harrison, Jonathan U; Yates, Christian A
2016-09-01
Stochastic simulation methods can be applied successfully to model exact spatio-temporally resolved reaction-diffusion systems. However, in many cases, these methods can quickly become extremely computationally intensive with increasing particle numbers. An alternative description of many of these systems can be derived in the diffusive limit as a deterministic, continuum system of partial differential equations (PDEs). Although the numerical solution of such PDEs is, in general, much more efficient than the full stochastic simulation, the deterministic continuum description is generally not valid when copy numbers are low and stochastic effects dominate. Therefore, to take advantage of the benefits of both of these types of models, each of which may be appropriate in different parts of a spatial domain, we have developed an algorithm that can be used to couple these two types of model together. This hybrid coupling algorithm uses an overlap region between the two modelling regimes. By coupling fluxes at one end of the interface and using a concentration-matching condition at the other end, we ensure that mass is appropriately transferred between PDE- and compartment-based regimes. Our methodology gives notable reductions in simulation time in comparison with using a fully stochastic model, while maintaining the important stochastic features of the system and providing detail in appropriate areas of the domain. We test our hybrid methodology robustly by applying it to several biologically motivated problems including diffusion and morphogen gradient formation. Our analysis shows that the resulting error is small, unbiased and does not grow over time. © 2016 The Authors.
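A heavily simplified 1D sketch of the coupling idea, with deterministic diffusion on one subdomain, stochastic compartment jumps on the other, and a Poisson-sampled flux handed across the interface (the parameters and the one-directional jump model are simplifications for brevity, not the authors' scheme):

    import numpy as np

    rng = np.random.default_rng(3)
    D, dx, dt, steps = 1.0, 1.0, 0.1, 500
    u = np.zeros(20); u[0] = 100.0        # PDE concentration (left subdomain)
    c = np.zeros(20, dtype=int)           # compartment copy numbers (right)

    for _ in range(steps):
        lap = np.empty_like(u)            # explicit diffusion, reflecting ends
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
        lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]
        u += D * dt / dx**2 * lap
        # Interface flux (assumption): a Poisson number of particles crosses
        # from the last PDE cell into compartment 0.
        n = min(rng.poisson(D * dt / dx**2 * u[-1]), int(u[-1]))
        u[-1] -= n; c[0] += n
        # Rightward stochastic jumps between compartments (one-way for brevity).
        jumps = rng.binomial(c[:-1], D * dt / dx**2)
        c[:-1] -= jumps; c[1:] += jumps

    print(u.sum() + c.sum())              # total mass is conserved (= 100)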
NASA Astrophysics Data System (ADS)
Aiello, Martina; Gianinetto, Marco
2017-10-01
Marine routes carry a huge portion of commercial and human trade, so surveillance, security and environmental protection themes are gaining increasing importance. Able to overcome the limits imposed by terrestrial means of monitoring, ship detection from satellite has recently prompted a renewed interest in the continuous monitoring of illegal activities. This paper describes an automatic Object Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows for regular observation unrestricted by lighting and atmospheric conditions, and for complementarity in terms of geographic coverage and geometric detail. The method adopts a region growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. A spatial analysis then retrieves the vessels' position, length and heading parameters, and a speed range is associated. The image processing chain is optimized by selecting image tiles through a statistical index. Vessel candidates are detected over amplitude SAR images using an adaptive-threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurements when AIS data are not available. Length estimation shows R2=0.85 and heading estimation R2=0.92, computed as the average of R2 values obtained for both optical and radar images.
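The vessel-candidate step uses an adaptive-threshold CFAR detector; a generic cell-averaging variant (window sizes and threshold factor are assumptions, not the paper's settings) can be sketched as:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def ca_cfar(img, bg=21, guard=9, factor=3.0):
        # Local clutter mean from a background ring: big window minus guard window.
        big = uniform_filter(img, size=bg)
        small = uniform_filter(img, size=guard)
        n_big, n_small = bg * bg, guard * guard
        clutter = (big * n_big - small * n_small) / (n_big - n_small)
        return img > factor * clutter     # boolean map of vessel candidates

    amp = np.random.default_rng(4).rayleigh(1.0, (256, 256))  # SAR-like clutter
    amp[100:104, 120:126] += 12.0                             # synthetic bright target
    print(ca_cfar(amp).sum(), "candidate pixels")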
Segmentation and detection of fluorescent 3D spots.
Ram, Sundaresh; Rodríguez, Jeffrey J; Bosco, Giovanni
2012-03-01
The 3D spatial organization of genes and other genetic elements within the nucleus is important for regulating gene expression. Understanding how this spatial organization is established and maintained throughout the life of a cell is key to elucidating the many layers of gene regulation. Quantitative methods for studying nuclear organization will lead to insights into the molecular mechanisms that maintain gene organization as well as serve as diagnostic tools for pathologies caused by loss of nuclear structure. However, biologists currently lack automated and high throughput methods for quantitative and qualitative global analysis of 3D gene organization. In this study, we use confocal microscopy and fluorescence in-situ hybridization (FISH) as a cytogenetic technique to detect and localize the presence of specific DNA sequences in 3D. FISH uses probes that bind to specific targeted locations on the chromosomes, appearing as fluorescent spots in 3D images obtained using fluorescence microscopy. In this article, we propose an automated algorithm for segmentation and detection of 3D FISH spots. The algorithm is divided into two stages: spot segmentation and spot detection. Spot segmentation consists of 3D anisotropic smoothing to reduce the effect of noise, top-hat filtering, and intensity thresholding, followed by 3D region-growing. Spot detection uses a Bayesian classifier with spot features such as volume, average intensity, texture, and contrast to detect and classify the segmented spots as either true or false spots. Quantitative assessment of the proposed algorithm demonstrates improved segmentation and detection accuracy compared to other techniques. Copyright © 2012 International Society for Advancement of Cytometry.
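A condensed sketch of the segmentation stage using scikit-image, with plain Gaussian smoothing standing in for the anisotropic filter and connected-component labeling standing in for the 3D region growing (all parameters illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter, label
    from skimage.morphology import white_tophat, ball
    from skimage.filters import threshold_otsu

    vol = np.random.default_rng(5).normal(0, 0.1, (32, 64, 64))
    vol[16, 30:33, 40:43] += 2.0                       # synthetic bright spot

    smooth = gaussian_filter(vol, sigma=1.0)           # noise suppression
    tophat = white_tophat(smooth, footprint=ball(3))   # enhance small bright spots
    binary = tophat > threshold_otsu(tophat)           # intensity thresholding
    labels, n_spots = label(binary)                    # candidate spot regions
    print(n_spots, "spot(s) detected")

The detection stage would then compute per-region features (volume, mean intensity, contrast) from the labeled regions and feed them to the Bayesian classifier described above.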
Hierarchical layered and semantic-based image segmentation using ergodicity map
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing
2010-04-01
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
MRI brain tumor segmentation based on improved fuzzy c-means method
NASA Astrophysics Data System (ADS)
Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo
2009-10-01
This paper focuses on image segmentation, which is one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. Firstly, we classify the image into the region of interest and the background using the fuzzy c-means algorithm. Then we use information about the tissues' gradients and the intensity inhomogeneities of regions to improve the quality of segmentation. The sum of the mean variance within the region and the reciprocal of the mean gradient along the edge of the region is chosen as the objective function; its minimum gives the optimum result. The results show that the clustering segmentation algorithm is effective.
Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video
Lee, Gil-beom; Lee, Myeong-jin; Lee, Woo-Kyung; Park, Joo-heon; Kim, Tae-Hwan
2017-01-01
Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos. PMID:28327515
NASA Astrophysics Data System (ADS)
Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron
2005-04-01
Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and automatic identification and classification of regions exhibiting mosaicism and punctation. Success of such algorithms depends, primarily, on the selection of relevant features representing the region of interest. We present statistical classification and segmentation algorithms based on color and geometric features, yielding excellent identification of the regions of interest. The distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential of developing an image-based screening tool for cervical cancer.
The MINERVA Software Development Process
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.
2017-01-01
This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Bailey, Sean W.; Shea, Donald M.; Feldman, Gene C.
2014-01-01
In clear shallow waters, light that is transmitted downward through the water column can reflect off the sea floor and thereby influence the water-leaving radiance signal. This effect can confound contemporary ocean color algorithms designed for deep waters where the seafloor has little or no effect on the water-leaving radiance. Thus, inappropriate use of deep water ocean color algorithms in optically shallow regions can lead to inaccurate retrievals of inherent optical properties (IOPs) and therefore have a detrimental impact on IOP-based estimates of marine parameters, including chlorophyll-a and the diffuse attenuation coefficient. In order to improve IOP retrievals in optically shallow regions, a semi-analytical inversion algorithm, the Shallow Water Inversion Model (SWIM), has been developed. Unlike established ocean color algorithms, SWIM considers both the water column depth and the benthic albedo. A radiative transfer study was conducted that demonstrated how SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Properties algorithm (GIOP) and Quasi-Analytical Algorithm (QAA), performed in optically deep and shallow scenarios. The results showed that SWIM performed well, whilst both GIOP and QAA showed distinct positive bias in IOP retrievals in optically shallow waters. The SWIM algorithm was also applied to a test region: the Great Barrier Reef, Australia. Using a single test scene and time series data collected by NASA's MODIS-Aqua sensor (2002-2013), a comparison of IOPs retrieved by SWIM, GIOP and QAA was conducted.
Parallel/distributed direct method for solving linear systems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near optimal performance and enjoy several important features: (1) For large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) It is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) It can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) This set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.
Stride search: A general algorithm for storm detection in high-resolution climate data
Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; ...
2016-04-13
This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
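The defining property, search regions of constant physical size, can be illustrated with a few lines that lay out region centers whose longitude stride widens toward the poles (the radius value and the per-region storm criterion are placeholders):

    import numpy as np

    def stride_centers(radius_km, earth_r=6371.0):
        dlat = np.degrees(radius_km / earth_r)       # constant N-S stride
        centers = []
        for lat in np.arange(-90 + dlat / 2, 90, dlat):
            # E-W stride widens with latitude so physical spacing stays fixed.
            dlon = dlat / max(np.cos(np.radians(lat)), 1e-6)
            centers.extend((lat, lon) for lon in np.arange(0, 360, dlon))
        return centers

    centers = stride_centers(radius_km=500.0)
    print(len(centers), "search regions cover the globe")
    # Each region would then be scanned for, e.g., a vorticity maximum that
    # exceeds a storm-identification threshold, independent of the data grid.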
NASA Astrophysics Data System (ADS)
Morgenthaler, George; Khatib, Nader; Kim, Byoungsoo
Providing farmers with information to improve their crop's vigor has been a major topic of interest. With world population growing exponentially, arable land being consumed by urbanization, and an unfavorable farm economy, the efficiency of farming must increase to meet future food requirements and to make farming a sustainable occupation for the farmer. "Precision Agriculture" refers to a farming methodology that applies nutrients and moisture only where and when they are needed in the field. The goal is to increase farm revenue by increasing crop yield and decreasing applications of costly chemical and water treatments. In addition, this methodology will decrease the environmental costs of farming, i.e., reduce air, soil, and water pollution. Remote Sensing/Precision Agriculture has not grown as rapidly as early advocates envisioned. Technology for a successful Remote Sensing/Precision Agriculture system is now available. Commercial satellite systems can image (multi-spectral) the Earth with a resolution of approximately 2.5 m. Variable precision dispensing systems using GPS are available and affordable. Crop models that predict yield as a function of soil, chemical, and irrigation parameter levels have been formulated. Personal computers and internet access are in place in most farm homes and can provide a mechanism to periodically disseminate, e.g. bi-weekly, advice on what quantities of water and chemicals are needed in individual regions of the field. What is missing is a model that fuses the disparate sources of information on the current states of the crop and soil, and the remaining resource levels available, with the decisions farmers are required to make. This must be a product that is easy for the farmer to understand and to implement. A "Constrained Optimization Feed-back Control Model" to fill this void will be presented. The objective function of the model will be used to maximize the farmer's profit by increasing yields while decreasing environmental costs and decreasing application of costly treatments. This model will incorporate information from remote sensing, in-situ weather sources, soil measurements, crop models, and tacit farmer knowledge of the relative productivity of the selected control regions of the farm to provide incremental advice throughout the growing season on water and chemical treatments. Genetic and meta-heuristic algorithms will be used to solve the constrained optimization problem that possesses complex constraints and a non-linear objective function.
NASA Astrophysics Data System (ADS)
DaCamara, Carlos; Libonati, Renata; Calado, Teresa; Ermida, Sofia; Nunes, Sílvia
2017-04-01
The use of remotely sensed information for burned area detection is well established and there is a general consensus about its usefulness from global down to regional levels. In particular, the combined use of near and middle infrared (NIR and MIR) channels has been shown to be particularly suitable to discriminate burned areas in a variety of ecosystems. The so-called (V,W) system [1,2] is a burn-sensitive vegetation index system defined in a transformed NIR-MIR space that has proven to be capable of discriminating burned pixels in the Brazilian biomes. A procedure based on the (V,W) system is here presented that allows discriminating burned areas and dating burning events. The procedure is tested over Portugal using NIR and MIR data from the Terra/Aqua MODIS Level 1B 1 km V5 product (MOD021/MYD021) together with active fire data from the MODIS V5 product Thermal Anomalies/Fire 5-Min L2 Swath 1km (MOD14/MYD14). First, monthly minimum composites of W are computed for July and August 2015. Burned pixels are then identified as the ones that are located close to hot spots (detected during August) and that present low values of composited minimum of W in August (characteristic of a burning event) together with a sharp decrease of composited minimum of W from July to August (that is expected to occur after a burning event). Burned pixels are then successively identified by a seeded region-growing algorithm. The day of burning of each pixel classified as burned is finally identified as the one that maximizes an index of temporal separability computed along the respective time series of available values of W in August. Results obtained are validated using as reference burned scars and dates as identified by the Rapid Damage Assessment (RDA) module developed by the European Forest Fire Information System (EFFIS); the EFFIS mapping process consists of an unsupervised procedure that uses MODIS bands at 250 m resolution combined with information from the CORINE Land Cover, followed by a seeded region-growing algorithm [3]. Almost half (49%) of the burned pixels are correctly identified, less than one fifth (18%) are false alarms and the total burned area is overestimated by 18%. On the other hand, more than three fourths (76%) of estimated days of burning presented deviations from reference data between -1 and 4 days. Performance of the proposed algorithm is to be viewed as highly satisfactory taking into account the coarser resolution of the procedure being validated (1 km) compared to the reference data (250 m). Research was performed in the framework of FAPESP/FCT BrFLAS Project and of LSA SAF. [1] Libonati et al. (2011), Remote Sensing of Environment, 115(6), 1464-1477. [2] DaCamara et al. (2016), IEEE Geoscience and Remote Sensing Letters 13(12), 1822-1826. [3] Salvador Civil & San-Miguel-Ayanz (2002), International Journal of Remote Sensing, 23(6), 1197-1205.
Scattering properties of electromagnetic waves from metal object in the lower terahertz region
NASA Astrophysics Data System (ADS)
Chen, Gang; Dang, H. X.; Hu, T. Y.; Su, Xiang; Lv, R. C.; Li, Hao; Tan, X. M.; Cui, T. J.
2018-01-01
An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of metal objects at lower terahertz (THz) frequencies. A metal object can be viewed as a perfectly electrically conducting object with a slightly rough surface in the lower THz region. Hence the THz field scattered from a metal object can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficients methods are combined to compute the coherent part, while the small perturbation method is used for the incoherent part. With the Monte Carlo method, the radar cross section of the rough metal surface is computed by the multilevel fast multipole algorithm and the proposed hybrid algorithm, respectively. The numerical results show that the proposed algorithm has good accuracy and rapidly simulates the scattering properties in the lower THz region.
Regional regularization method for ECT based on spectral transformation of Laplacian
NASA Astrophysics Data System (ADS)
Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.
2016-10-01
Image reconstruction in electrical capacitance tomography (ECT) is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise when solving it. An anisotropic regional regularization algorithm for ECT is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.
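As context for the comparison, the conventional Tikhonov baseline against which the regional regularizer is evaluated can be sketched as a single regularized solve (synthetic sensitivity matrix; the paper's spectral-transformation penalty is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(6)
    S = rng.normal(size=(60, 100))        # sensitivity matrix (assumed known)
    g_true = np.zeros(100); g_true[40:55] = 1.0
    c = S @ g_true + rng.normal(scale=0.01, size=60)   # noisy capacitance data

    lam = 0.1                              # regularization parameter
    g_hat = np.linalg.solve(S.T @ S + lam * np.eye(100), S.T @ c)
    print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))

The regional approach described above replaces the uniform lam * I penalty with a spatially varying weight derived from the sensitivity map, which is what reduces the shape distortion reported in the abstract.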
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method that combines geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least-squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
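A toy version of the least-squares prediction step, fitting weights over three causal neighbors and returning the residuals that would then be entropy-coded (the paper's adaptive region shapes are reduced to one full image here):

    import numpy as np

    def ls_predict(img):
        H, W = img.shape
        # Causal neighbors: west, north, north-west of each predicted pixel.
        X = np.array([(img[r, c - 1], img[r - 1, c], img[r - 1, c - 1])
                      for r in range(1, H) for c in range(1, W)], float)
        y = np.array([img[r, c] for r in range(1, H) for c in range(1, W)], float)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)   # LS predictor weights
        resid = y - X @ w                            # residuals to be coded
        return w, resid

    img = np.arange(64, dtype=float).reshape(8, 8) + 0.1
    w, resid = ls_predict(img)
    print(w, float(np.abs(resid).mean()))  # small residuals => compressible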
NASA Technical Reports Server (NTRS)
Madyastha, Raghavendra K.; Aazhang, Behnaam; Henson, Troy F.; Huxhold, Wendy L.
1992-01-01
This paper addresses the issue of applying a globally convergent optimization algorithm to the training of multilayer perceptrons, a class of Artificial Neural Networks. The multilayer perceptrons are trained towards the solution of two highly nonlinear problems: (1) signal detection in a multi-user communication network, and (2) solving the inverse kinematics for a robotic manipulator. The research is motivated by the fact that a multilayer perceptron is theoretically capable of approximating any nonlinear function to within a specified accuracy. The algorithm employed in this study combines the merits of two well known optimization algorithms, the Conjugate Gradient and Trust Region algorithms. Its performance is compared with that of a widely used algorithm, the Backpropagation Algorithm, which is essentially a gradient-based algorithm and hence slow to converge. The two algorithms are compared in terms of convergence rate. Furthermore, in the case of the signal detection problem, performance is also benchmarked by the decision boundaries drawn and the probability of error obtained in either case.
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu
2017-05-01
In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system. We also introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key concept of the proposed algorithm is to evaluate the possibility that an acoustic source exists within a searching region, using the geometric distance between each sensor element of the acoustic detector and the corresponding searching region, denoted by a grid. We derive the mathematical equation for the magnitude of this existence possibility, which can be used to implement the proposed algorithm, and we derive the equations for both the one-dimensional and two-dimensional sensing array cases. k-Wave simulation data are used to compare the image quality of the proposed algorithm with that of a conventional algorithm, in which an FFT is necessarily used. The k-Wave MATLAB simulation results demonstrate the effectiveness of the proposed reconstruction algorithm.
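A toy delay-and-sum style rendering of the distance-based idea for a 1D sensor array (the paper's existence-possibility measure is only approximated by the coherent sum below; the geometry and sampling values are invented):

    import numpy as np

    c0, fs = 1500.0, 2e6                  # speed of sound (m/s), sample rate (Hz)
    sensors = np.stack([np.linspace(-0.02, 0.02, 64), np.zeros(64)], axis=1)
    src = np.array([0.005, 0.015])        # hidden source, to be recovered

    t_len = 1024
    sig = np.zeros((64, t_len))           # synthetic one-sample pulses
    for i, s in enumerate(sensors):
        sig[i, int(np.linalg.norm(src - s) / c0 * fs)] = 1.0

    xs, ys = np.linspace(-0.02, 0.02, 81), np.linspace(0.001, 0.03, 60)
    img = np.zeros((ys.size, xs.size))
    for iy, y in enumerate(ys):           # score each grid point by summing
        for ix, x in enumerate(xs):       # samples at its implied delays
            d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
            idx = np.minimum((d / c0 * fs).astype(int), t_len - 1)
            img[iy, ix] = sig[np.arange(64), idx].sum()

    peak = np.unravel_index(img.argmax(), img.shape)
    print("estimated source:", xs[peak[1]], ys[peak[0]])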
Improving semi-automated segmentation by integrating learning with active sampling
NASA Astrophysics Data System (ADS)
Huo, Jing; Okada, Kazunori; Brown, Matthew
2012-02-01
Interactive segmentation algorithms such as GrowCut usually require quite a few user interactions to perform well, and have poor repeatability. In this study, we developed a novel technique to boost the performance of the interactive segmentation method GrowCut involving: 1) a novel "focused sampling" approach for supervised learning, as opposed to conventional random sampling; 2) boosting GrowCut using the machine-learned results. We applied the proposed technique to glioblastoma multiforme (GBM) brain tumor segmentation, and evaluated it on a dataset of ten cases from a multi-center pharmaceutical drug trial. The results showed that the proposed system has the potential to reduce user interaction while maintaining similar segmentation accuracy.
DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.
Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A
2017-01-01
Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. Large volumes of data obtained at high resolution require development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection suffer from difficulties in detecting particular cell types and in handling cell populations of different brightness, non-uniformly stained cells, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells. We developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise-dependent confidence, including samples with cells of different brightness, non-uniformly stained cells, and overlapping cells, for whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
Distributed-Memory Fast Maximal Independent Set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew
The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All those algorithms are designed focusing on shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby's seminal MIS algorithms, "Luby(A)" and "Luby(B)," to distributed-memory execution, and we evaluate their performance. We compare our results with the "Filtered MIS" implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
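For reference, the shared-memory idea underlying Luby's randomized approach can be written in a few lines; the paper's contribution is extending this to distributed memory, which the sketch below does not attempt:

    import random

    def luby_mis(adj):
        active, mis = set(adj), set()
        while active:
            # Each round: every active vertex draws a random value; local
            # minima join the MIS and retire themselves and their neighbors.
            val = {v: random.random() for v in active}
            winners = {v for v in active
                       if all(val[v] < val[u] for u in adj[v] if u in active)}
            mis |= winners
            active -= winners | {u for v in winners for u in adj[v]}
        return mis

    # 5-cycle: a valid MIS has no two adjacent vertices and is maximal.
    adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(sorted(luby_mis(adj)))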
Ant colony optimization algorithm for signal coordination of oversaturated traffic networks.
DOT National Transportation Integrated Search
2010-05-01
Traffic congestion is a daily and growing problem of the modern era in almost all major cities in the world. Increasing traffic demand strains the existing transportation system, leading to oversaturated network conditions, especially at peak hou...
NASA Astrophysics Data System (ADS)
Imani Masouleh, Mehdi; Limebeer, David J. N.
2018-07-01
In this study we estimate the region of attraction (RoA) of the lateral dynamics of a nonlinear single-track vehicle model. The tyre forces are approximated using rational functions that are shown to capture the nonlinearities of tyre curves significantly better than polynomial functions. An existing sum-of-squares (SOS) programming algorithm for estimating regions of attraction is extended to accommodate the use of rational vector fields. This algorithm is then used to find an estimate of the RoA of the vehicle lateral dynamics. The influence of vehicle parameters and driving conditions on the stability region is studied. It is shown that SOS programming techniques can be used to approximate the stability region without resorting to numerical integration. The RoA estimate from the SOS algorithm is compared to existing results in the literature. The proposed method is shown to obtain significantly better RoA estimates.
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2017-12-01
Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties. These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from the Moderate Resolution Imaging Spectroradiometer (MODIS).
Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery
NASA Astrophysics Data System (ADS)
Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.
2016-12-01
Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS were equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, where every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and is also able to differentiate object features on the surface.
Zhou, Yuting; Xiao, Xiangming; Qin, Yuanwei; Dong, Jinwei; Zhang, Geli; Kou, Weili; Jin, Cui; Wang, Jie; Li, Xiangping
2016-01-01
Accurate and up-to-date information on the spatial distribution of paddy rice fields is necessary for the studies of trace gas emissions, water source management, and food security. The phenology-based paddy rice mapping algorithm, which identifies the unique flooding stage of paddy rice, has been widely used. However, identification and mapping of paddy rice in rice-wetland coexistent areas is still a challenging task. In this study, we found that the flooding/transplanting periods of paddy rice and natural wetlands were different. The natural wetlands flood earlier and have a shorter duration than paddy rice in the Panjin Plain, a temperate region in China. We used this asynchronous flooding stage to extract the paddy rice planting area from the rice-wetland coexistent area. MODIS Land Surface Temperature (LST) data was used to derive the temperature-defined plant growing season. Landsat 8 OLI imagery was used to detect the flooding signal and then paddy rice was extracted using the difference in flooding stages between paddy rice and natural wetlands. The resultant paddy rice map was evaluated with in-situ ground-truth data and Google Earth images. The estimated overall accuracy and Kappa coefficient were 95% and 0.90, respectively. The spatial pattern of the OLI-derived paddy rice map agrees well with the paddy rice layer from the National Land Cover Dataset from 2010 (NLCD-2010). The differences between the Landsat-derived and NLCD-2010 paddy rice maps are in the range of ±20% for most 1-km grid cells. The results of this study demonstrate the potential of the phenology-based paddy rice mapping algorithm, via integrating MODIS and Landsat 8 OLI images, to map paddy rice fields in complex landscapes of paddy rice and natural wetland in the temperate region. PMID:27688742
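The flooding test at the core of such phenology-based mapping compares a water-sensitive index against a vegetation index; a sketch using the commonly cited LSWI-versus-EVI criterion with the usual 0.05 offset (band reflectances below are synthetic, and the abstract's date-window logic for separating earlier-flooding wetlands is omitted) is:

    import numpy as np

    def flooded(nir, red, blue, swir):
        evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
        lswi = (nir - swir) / (nir + swir)
        return lswi + 0.05 >= evi        # water-dominated surface signal

    # A wet paddy pixel (low NIR) vs. a dry, densely vegetated pixel:
    print(flooded(0.15, 0.08, 0.04, 0.12), flooded(0.40, 0.06, 0.03, 0.20))

Restricting the test to the temperature-defined transplanting window, as described above, is what separates rice from the earlier-flooding natural wetlands.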
Colour image segmentation using unsupervised clustering technique for acute leukemia images
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.
2015-05-01
Colour image segmentation has become increasingly popular in computer vision due to its importance in most medical analysis tasks. This paper proposes a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to increase the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models in order to segment the blast cells from the red blood cells and background regions in the leukemia image. Different colour components of the RGB and HSI colour models have been analyzed to identify the colour component that gives good segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images, compared to the other colour components of the RGB and HSI colour models.
NASA Astrophysics Data System (ADS)
Zhao, Yan-Ru; Yu, Ke-Qiang; Li, Xiaoli; He, Yong
2016-12-01
Infected petals are often regarded as the source of the spread of the fungus Sclerotinia sclerotiorum throughout the growing process of rapeseed (Brassica napus L.) plants. This research aimed to detect fungal infection of rapeseed petals by applying hyperspectral imaging in the spectral region of 874-1734 nm coupled with chemometrics. Reflectance was extracted from regions of interest (ROIs) in the hyperspectral image of each sample. Firstly, principal component analysis (PCA) was applied to conduct a cluster analysis with the first several principal components (PCs). Then, two methods, X-loadings of PCA and the random frog (RF) algorithm, were used and compared for optimal waveband selection. Least squares-support vector machine (LS-SVM) methodology was employed to establish discriminative models based on the optimal and full wavebands. Finally, the area under the receiver operating characteristic curve (AUC) was utilized to evaluate the classification performance of these LS-SVM models. It was found that the LS-SVM model based on the combination of all optimal wavebands had the best performance, with an AUC of 0.929. These results are promising and demonstrate the potential of applying hyperspectral imaging to fungal infection detection on rapeseed petals.
Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest
NASA Astrophysics Data System (ADS)
Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David
2009-02-01
Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process terminates automatically after the scheme has assessed all VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airway trees reasonably accurately, with a lower false-positive identification rate than previously reported schemes based on 2-D image segmentation and data analysis, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to segment the airway trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airway tree branches.
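A minimal sketch of the core operation, 3D region growing with an intensity threshold, plus a simple leak test that backs off when a small threshold increase makes the region explode into the parenchyma. The HU values, step size and doubling heuristic are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, thresh):
    # Grow a 6-connected region from `seed`, accepting voxels darker
    # than `thresh` (airway lumen is near -1000 HU in CT).
    grown = np.zeros(vol.shape, dtype=bool)
    grown[seed] = True
    q = deque([seed])
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in steps:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not grown[n] and vol[n] < thresh:
                grown[n] = True
                q.append(n)
    return grown

def adaptive_grow(vol, seed, t0=-950, t1=-800, step=10, leak_factor=2.0):
    # Raise the threshold until one more step makes the region explode,
    # a crude stand-in for the paper's leak test into the parenchyma.
    prev = region_grow_3d(vol, seed, t0)
    for t in range(t0 + step, t1 + step, step):
        cur = region_grow_3d(vol, seed, t)
        if cur.sum() > leak_factor * max(prev.sum(), 1):
            return prev
        prev = cur
    return prev
```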
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications, such as video surveillance, face image database management, and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in crowded environments. The algorithm uses a combination of skin color histograms, morphological processing, and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
Adaboost multi-view face detection based on YCgCr skin color model
NASA Astrophysics Data System (ADS)
Lan, Qi; Xu, Zhiyong
2016-09-01
The traditional Adaboost face detection algorithm trains face classifiers with Haar-like features and achieves a low detection error rate in face regions. Under complex backgrounds, however, the classifiers easily misclassify background regions whose gray-level distribution resembles that of faces, so the error rate of the traditional Adaboost algorithm is high. Skin, one of the most important features of a face, clusters well in the YCgCr color space, so non-face areas can be excluded quickly with a skin color model. Combining the advantages of the Adaboost algorithm and skin color detection, this paper therefore proposes an Adaboost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces false detections.
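A minimal sketch of the skin-color pre-filter stage: convert RGB to YCgCr and keep pixels inside a box in the Cg-Cr plane, so the Adaboost classifier only runs on candidate skin regions. The transform coefficients follow the BT.601-style YCgCr definition used in the skin-modeling literature, and the threshold ranges are illustrative assumptions, not this paper's trained values.

```python
import numpy as np

def rgb_to_ycgcr(img):
    # BT.601-style transform with green chroma Cg in place of Cb;
    # coefficients follow the YCgCr skin-modeling literature and are
    # an assumption of this sketch. Input: uint8 RGB; output in [0, 255].
    rgb = img.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0
    cg = 128.0 + (-81.085 * r + 112.0 * g - 30.915 * b) / 255.0
    cr = 128.0 + (112.0 * r - 93.786 * g - 18.214 * b) / 255.0
    return y, cg, cr

def skin_mask(img, cg_rng=(85, 135), cr_rng=(135, 180)):
    # Box thresholds in the Cg-Cr plane; the ranges are illustrative,
    # not the paper's trained values.
    _, cg, cr = rgb_to_ycgcr(img)
    return (cg >= cg_rng[0]) & (cg <= cg_rng[1]) & \
           (cr >= cr_rng[0]) & (cr <= cr_rng[1])
```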
Distributed Optimal Power Flow of AC/DC Interconnected Power Grid Using Synchronous ADMM
NASA Astrophysics Data System (ADS)
Liang, Zijun; Lin, Shunjiang; Liu, Mingbo
2017-05-01
Distributed optimal power flow (OPF) is of great importance and a challenge for AC/DC interconnected power grids with different dispatching centres, considering the security and privacy of information transmission. In this paper, a fully distributed algorithm for the OPF problem of AC/DC interconnected power grids, called synchronous ADMM, is proposed; it requires no central controller. The algorithm is based on the alternating direction method of multipliers (ADMM): the average of the boundary variables of adjacent regions from the current iteration serves as the reference value for both regions in the next iteration, which enables parallel computation among regions. The algorithm is tested on an IEEE 11-bus AC/DC interconnected power grid; comparison with a centralized algorithm shows nearly no differences, validating its correctness and effectiveness.
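To make the averaging-and-parallel-update idea concrete, here is a toy consensus-ADMM loop in which several "regions" with quadratic local costs agree on one shared boundary variable. This is the standard consensus-ADMM skeleton, shown under the assumption that each region's OPF subproblem is replaced by a one-variable quadratic; the paper's actual subproblems are full regional OPF solves.

```python
import numpy as np

def consensus_admm(a, b, rho=1.0, iters=100):
    # Consensus ADMM on a shared boundary variable; each region i has a
    # stand-in local cost f_i(x) = 0.5*a_i*x^2 + b_i*x. Regions update
    # in parallel against the average z, mimicking the synchronous scheme.
    a, b = np.asarray(a, float), np.asarray(b, float)
    x = np.zeros_like(a)   # each region's copy of the boundary variable
    y = np.zeros_like(a)   # dual variables
    z = 0.0                # consensus (averaged) value
    for _ in range(iters):
        x = (rho * z - y - b) / (a + rho)   # local solves, in parallel
        z = np.mean(x + y / rho)            # average of boundary copies
        y = y + rho * (x - z)               # dual ascent
    return z

# Two-region toy: optimum of the summed costs is x* = -(b1+b2)/(a1+a2) = -1.0
print(consensus_admm(a=[1.0, 2.0], b=[1.0, 2.0]))
```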
Enhanced Deep Blue Aerosol Retrieval Algorithm: The Second Generation
NASA Technical Reports Server (NTRS)
Hsu, N. C.; Jeong, M.-J.; Bettenhausen, C.; Sayer, A. M.; Hansell, R.; Seftor, C. S.; Huang, J.; Tsay, S.-C.
2013-01-01
The aerosol products retrieved using the MODIS Collection 5.1 Deep Blue algorithm have provided useful information about aerosol properties over bright-reflecting land surfaces, such as desert, semi-arid, and urban regions. However, many components of the C5.1 retrieval algorithm needed to be improved, for example its use of a static surface database to estimate surface reflectances. This is particularly important over regions of mixed vegetated and non-vegetated surfaces, which may undergo strong seasonal changes in land cover. To address this issue, we developed a hybrid approach that combines a pre-calculated surface reflectance database with the normalized difference vegetation index (NDVI) to determine the surface reflectance for aerosol retrievals. As a result, the spatial coverage of aerosol data generated by the enhanced Deep Blue algorithm has been extended from the arid and semi-arid regions to the entire land areas.
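One plausible reading of such a hybrid is a weighted blend between the static database and an NDVI-driven estimate; the sketch below shows that form. The blending thresholds and the linear mix are assumptions for illustration, since the abstract does not specify the combination rule.

```python
import numpy as np

def hybrid_surface_reflectance(ndvi, static_db, ndvi_based, t_low=0.2, t_high=0.3):
    # Bare surfaces use the pre-calculated database, vegetated surfaces
    # use the NDVI-derived estimate, with a linear mix in between.
    # Thresholds and the linear blend are illustrative assumptions.
    w = np.clip((ndvi - t_low) / (t_high - t_low), 0.0, 1.0)
    return (1.0 - w) * static_db + w * ndvi_based
```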
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
This paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
On the Deduction of Galactic Abundances with Evolutionary Neural Networks
NASA Astrophysics Data System (ADS)
Taylor, M.; Diaz, A. I.
2007-12-01
A growing number of indicators are now being used with some confidence to measure the metallicity (Z) of photoionisation regions in planetary nebulae, galactic HII regions (GHIIRs), extra-galactic HII regions (EGHIIRs) and HII galaxies (HIIGs). However, a universal indicator valid also at high metallicities has yet to be found. Here, we report on a new artificial intelligence-based approach to determine metallicity indicators that shows promise for the provision of improved empirical fits. The method hinges on the application of an evolutionary neural network to observational emission line data. The network's DNA, encoded in its architecture, weights and neuron transfer functions, is evolved using a genetic algorithm. Furthermore, selection, operating on a set of 10 distinct neuron transfer functions, means that the empirical relation encoded in the network solution architecture is in functional rather than numerical form. Thus the network solutions provide an equation for the metallicity in terms of line ratios without a priori assumptions. Tapping into the mathematical power offered by this approach, we applied the network to detailed observations of both nebular and auroral emission lines from 0.33 μm to 1 μm for a sample of 96 HII-type regions, and we were able to obtain an empirical relation between Z and S_{23} with a dispersion of only 0.16 dex. We show how the method can be used to identify new diagnostics as well as the nonlinear relationship supposed to exist between the metallicity Z, ionisation parameter U and effective (or equivalent) temperature T*.
NASA Astrophysics Data System (ADS)
Tamez-Peña, José G.; Barbu-McInnis, Monica; Totterman, Saara
2006-03-01
Abnormal MR findings including cartilage defects, cartilage denuded areas, osteophytes, and bone marrow edema (BME) are used in staging and evaluating the degree of osteoarthritis (OA) in the knee. The locations of the abnormal findings have been correlated to the degree of pain and stiffness of the joint in the same location. The definition of the anatomic region in MR images is not always an objective task, due to the lack of clear anatomical features. This uncertainty causes variance in the location of the abnormality between readers and time points. Therefore, it is important to have a reproducible system to define the anatomic regions. This work presents a computerized approach to define the different anatomic knee regions. The approach is based on an algorithm that uses unique features of the femur and its spatial relation in the extended knee. The femur features are found from three-dimensional segmentation maps of the knee. From the segmentation maps, the algorithm automatically divides the femur cartilage into five anatomic regions: trochlea, medial weight bearing area, lateral weight bearing area, posterior medial femoral condyle, and posterior lateral femoral condyle. Furthermore, the algorithm automatically labels the medial and lateral tibia cartilage. The unsupervised definition of the knee regions allows a reproducible way to evaluate regional OA changes. This work will present the application of this automated algorithm for the regional analysis of the cartilage tissue.
Evaluation of Bio-optical Algorithms for Chlorophyll Mapping in the Southwestern Atlantic
NASA Astrophysics Data System (ADS)
Garcia, V. M.; Garcia, C. A.; Signorini, S.; McClain, C. R.
2005-05-01
Efforts have been made over the past decade to study bio-optical properties of seawater in the Southwestern Atlantic for mapping chlorophyll concentration from space. Coastal regions deserve greater attention due to the optical complexity arising from continental influence. Here we present an attempt to derive reliable bio-optical chlorophyll algorithms in the shelf region 25-40°S and 60-45°W. This area is subject to large optical interference by continental runoff from the La Plata River and Patos Lagoon. Spectral upwelling radiance and surface chlorophyll concentration data collected in past years have been used to generate a regional version of NASA's OC2v4 model. The regional 2-band algorithm (termed OC2-LP) reduces the positive chlorophyll bias to 11%, compared with 27% for the global SeaWiFS OC4v4 algorithm. However, OC2-LP still has an overall error of over 40% in chlorophyll concentration, calculated as the absolute percentage difference between in-situ and model-derived values. In-situ chlorophyll data from two cruises to the study region (La Plata I, winter of 2003, and La Plata II, summer of 2004) have been used to test the accuracy of the derived algorithm as well as the global version. A marked seasonal difference was found: both OC4v4 and OC2-LP overestimate chlorophyll in summer to a greater degree than in winter. These results indicate the need for approaches other than empirical band-ratio models in coastal waters of this region.
A novel metaheuristic for continuous optimization problems: Virus optimization algorithm
NASA Astrophysics Data System (ADS)
Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue
2016-01-01
A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance exploitation and exploration. The performance of the VOA is validated on a set of eight benchmark functions, which are also subjected to rotation and shifting to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variants, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.
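A minimal sketch of the virus-style loop described above: the best (strong) viruses replicate more and with smaller perturbations, common viruses explore more widely, and an 'antivirus' culls the population back to a cap. Replication counts, the shrinking step size and the cap are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def voa_minimize(f, dim, bounds, pop0=10, n_strong=3, rep_strong=2,
                 rep_common=1, pop_cap=60, iters=200, seed=0):
    # Virus-optimization-style loop: strong viruses exploit (more
    # replicas, smaller steps), common viruses explore; the antivirus
    # culls the worst to keep the population bounded.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop0, dim))
    scale = 0.1 * (hi - lo)
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, pop)
        pop = pop[np.argsort(fit)]          # best (strong) viruses first
        new = []
        for i, virus in enumerate(pop):
            reps = rep_strong if i < n_strong else rep_common
            step = scale * (0.5 if i < n_strong else 1.0)
            for _ in range(reps):
                new.append(np.clip(virus + rng.normal(0, step, dim), lo, hi))
        pop = np.vstack([pop, new])
        if len(pop) > pop_cap:              # antivirus: cull the worst
            fit = np.apply_along_axis(f, 1, pop)
            pop = pop[np.argsort(fit)[:pop_cap]]
    fit = np.apply_along_axis(f, 1, pop)
    return pop[fit.argmin()], fit.min()

# Usage on the sphere function:
# x, fx = voa_minimize(lambda v: float(np.sum(v**2)), dim=5, bounds=(-5.0, 5.0))
```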
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically, by calculating the resolution levels at which communities grow, rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing step, the algorithm gives much more precise results on LFR benchmarks with high overlap than other algorithms, and performs very similarly to GCE.
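For context, the sketch below shows the greedy local expansion that this family of methods (LFM, GCE, and the present algorithm) builds on, using the usual local fitness f(C) = k_in / (k_in + k_out)^alpha. The analytic computation of resolution levels that distinguishes the paper's method is not reproduced here.

```python
def fitness(adj, comm, alpha=1.0):
    # LFM-style local fitness: internal degree over total degree.
    k_in = sum(1 for u in comm for v in adj[u] if v in comm) / 2.0
    k_out = sum(1 for u in comm for v in adj[u] if v not in comm)
    total = k_in + k_out
    return k_in / total ** alpha if total > 0 else 0.0

def expand_seed(adj, seed, alpha=1.0):
    # Greedily add the neighbor that most improves the community's
    # local fitness; stop when no neighbor helps.
    comm = {seed}
    while True:
        frontier = {v for u in comm for v in adj[u]} - comm
        base = fitness(adj, comm, alpha)
        best = max(frontier, key=lambda v: fitness(adj, comm | {v}, alpha),
                   default=None)
        if best is None or fitness(adj, comm | {best}, alpha) <= base:
            return comm
        comm.add(best)

# Two triangles joined by a single edge: the seed's triangle is found.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(expand_seed(adj, 0))  # {0, 1, 2}
```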
Cluster-Based Multipolling Sequencing Algorithm for Collecting RFID Data in Wireless LANs
NASA Astrophysics Data System (ADS)
Choi, Woo-Yong; Chatterjee, Mainak
2015-03-01
With the growing use of RFID (Radio Frequency Identification), it is becoming important to devise ways to read RFID tags in real time. Access points (APs) of IEEE 802.11-based wireless Local Area Networks (LANs) are being integrated with RFID networks so that real-time RFID data can be collected efficiently. Several schemes, such as multipolling methods based on the dynamic search algorithm and on random sequencing, have been proposed. However, as the number of RFID readers associated with an AP increases, it becomes difficult for the dynamic search algorithm to derive the multipolling sequence in real time; and although multipolling methods can eliminate the polling overhead, the performance of multipolling based on random sequencing still needs to be enhanced. To that end, we propose a real-time cluster-based multipolling sequencing algorithm that eliminates more than 90% of the polling overhead, particularly when the dynamic search algorithm fails to derive the multipolling sequence in real time.
Region growing using superpixels with learned shape prior
NASA Astrophysics Data System (ADS)
Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro
2017-11-01
Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
Incremental social learning in particle swarms.
de Oca, Marco A Montes; Stutzle, Thomas; Van den Enden, Ken; Dorigo, Marco
2011-04-01
Incremental social learning (ISL) was proposed as a way to improve the scalability of systems composed of multiple learning agents. In this paper, we show that ISL can be very useful to improve the performance of population-based optimization algorithms. Our study focuses on two particle swarm optimization (PSO) algorithms: a) the incremental particle swarm optimizer (IPSO), which is a PSO algorithm with a growing population size in which the initial position of new particles is biased toward the best-so-far solution, and b) the incremental particle swarm optimizer with local search (IPSOLS), in which solutions are further improved through a local search procedure. We first derive analytically the probability density function induced by the proposed initialization rule applied to new particles. Then, we compare the performance of IPSO and IPSOLS on a set of benchmark functions with that of other PSO algorithms (with and without local search) and a random restart local search algorithm. Finally, we measure the benefits of using incremental social learning on PSO algorithms by running IPSO and IPSOLS on problems with different fitness distance correlations.
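The crux of IPSO's incremental social learning is the initialization rule for each newly added particle: sample a uniform random point, then move it a random fraction of the way toward the best-so-far solution. A minimal sketch of that rule follows (the surrounding PSO loop is omitted):

```python
import numpy as np

def init_new_particle(gbest, lo, hi, rng):
    # ISL initialization: random point moved a random fraction of the
    # way toward the best-so-far solution; the paper derives the
    # probability density this rule induces.
    x_rand = rng.uniform(lo, hi, size=gbest.shape)
    return x_rand + rng.uniform(0.0, 1.0) * (gbest - x_rand)

# Growing the swarm by one particle:
# rng = np.random.default_rng(0)
# gbest = np.zeros(5)               # current best-so-far position
# new_x = init_new_particle(gbest, -10.0, 10.0, rng)
```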
Alternaria leaf spot in Michigan and fungicide sensitivity issues
USDA-ARS?s Scientific Manuscript database
Since 2010 there has been an increase in identification of Alternaria leaf spot on sugar beet in Michigan and other growing regions in the US and Canada. In 2016, the disease was severe enough to cause economic losses in the Michigan growing region. Michigan isolates from sugar beet were examined ...
Growing an Emerging Research University
ERIC Educational Resources Information Center
Birx, Donald L.; Anderson-Fletcher, Elizabeth; Whitney, Elizabeth
2013-01-01
The emerging research college or university is one of the most formidable resources a region has to reinvent and grow its economy. This paper is the first of two that outlines a process of building research universities that enhance regional technology development and facilitate flexible networks of collaboration and resource sharing. Although the…
Matrimony vine and potato psyllid in the Pacific Northwest: a worrisome marriage?
USDA-ARS?s Scientific Manuscript database
Managing zebra chip disease in the potato growing regions of Washington, Oregon, and Idaho is complicated by confusion about the source of the insect vector (potato psyllid) as it colonizes potato fields in these growing regions. Not knowing where the psyllid is before arriving in Washington potato...
Shahbeig, Saleh; Pourghassem, Hossein
2013-01-01
Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from retinal images without the use of blood-vessel information. In this algorithm, to compensate for destructive illumination changes and to enhance the contrast of the retinal images, we estimate the background illumination and apply an adaptive correction function to the curvelet transform coefficients of the retinal images; this removes confounding factors and paves the way for exact extraction of the ON region. We then detect the ON region using morphological operators based on geodesic transformations, applying a suitable adaptive correction function to the curvelet transform coefficients of the reconstructed image together with a novel, powerful criterion. Finally, using local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on available images from the DRIVE and STARE databases. The experimental results indicate that the proposed algorithm achieves accuracy rates of 100% and 97.53% for ON extraction on the DRIVE and STARE databases, respectively.
ChIP-PaM: an algorithm to identify protein-DNA interaction using ChIP-Seq data.
Wu, Song; Wang, Jianmin; Zhao, Wei; Pounds, Stanley; Cheng, Cheng
2010-06-03
ChIP-Seq is a powerful tool for identifying the interaction between genomic regulators and their bound DNAs, especially for locating transcription factor binding sites. However, high cost and high rate of false discovery of transcription factor binding sites identified from ChIP-Seq data significantly limit its application. Here we report a new algorithm, ChIP-PaM, for identifying transcription factor target regions in ChIP-Seq datasets. This algorithm makes full use of a protein-DNA binding pattern by capitalizing on three lines of evidence: 1) the tag count modelling at the peak position, 2) pattern matching of a specific tag count distribution, and 3) motif searching along the genome. A novel data-based two-step eFDR procedure is proposed to integrate the three lines of evidence to determine significantly enriched regions. Our algorithm requires no technical controls and efficiently discriminates falsely enriched regions from regions enriched by true transcription factor (TF) binding on the basis of ChIP-Seq data only. An analysis of real genomic data is presented to demonstrate our method. In a comparison with other existing methods, we found that our algorithm provides more accurate binding site discovery while maintaining comparable statistical power.
Krohn, Thomas; Birmes, Anita; Winz, Oliver H; Drude, Natascha I; Mottaghy, Felix M; Behrendt, Florian F; Verburg, Frederik A
2017-04-01
To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [68Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were reconstructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [68Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm, and significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [68Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information.
Prakash Nepal; Peter J. Ince; Kenneth E. Skog; Sun J. Chang
2012-01-01
This paper describes a set of empirical net forest growth models based on forest growing-stock density relationships for three U.S. regions (North, South, and West) and two species groups (softwoods and hardwoods) at the regional aggregate level. The growth models accurately predict historical U.S. timber inventory trends when we incorporate historical timber harvests...
Jain, Rajat; Omar, Mohamed; Chaparala, Hemant; Kahn, Adam; Li, Jianbo; Kahn, Leonard; Sivalingam, Sri
2018-04-23
To compare the accuracy and reliability of stone volume estimated by the ellipsoid formula (EFv) and a CT-based algorithm (CTv) against true volume (TV) by water displacement in an in vitro model. Ninety stone phantoms were created using clay (0.5-40 cm³, 814 ± 91 HU) and scanned with CT. For each stone, TV was measured by water displacement, CTv was calculated by the region-growing algorithm in the CT-based software AGFA IMPAX Volume Viewer, and EFv was calculated by the standard formula π × L × W × H × 0.167. All measurements were repeated thrice, and the concordance correlation coefficient (CCC) was calculated for the whole group as well as for subgroups based on volume (<1.5 cm³, 1.5-6 cm³, and >6 cm³). Mean TV, CTv, and EFv were 6.42 ± 6.57 cm³ (range: 0.5-39.37 cm³), 6.24 ± 6.15 cm³ (0.48-36.1 cm³), and 8.98 ± 9.96 cm³ (0.49-47.05 cm³), respectively. When comparing TV to CTv, the CCC was 0.99 (95% confidence interval [CI]: 0.99-0.995), indicating excellent agreement, although TV was slightly underestimated at larger volumes. When comparing TV to EFv, the CCC was 0.82 (95% CI: 0.78-0.86), indicating poor agreement. EFv tended to overestimate the TV, especially as stone volume increased beyond 1.5 cm³, and there was a significant spread between trials. An automated CT-based algorithm estimates stone volume more accurately and reliably than the ellipsoid formula. While further research is necessary to validate stone volume as a surrogate for stone burden, CT-based algorithmic volume measurement of urinary stones is a promising technology.
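A worked instance of the ellipsoid formula quoted above (0.167 ≈ 1/6, so this is the usual π/6 · L · W · H ellipsoid volume):

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    # Ellipsoid estimate used in the study: V = pi * L * W * H * 0.167.
    return math.pi * length_cm * width_cm * height_cm * 0.167

# A 3 x 2 x 2 cm stone:
print(round(ellipsoid_volume(3, 2, 2), 2))  # ~6.3 cm^3
```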
SU-F-J-115: Target Volume and Artifact Evaluation of a New Device-Less 4D CT Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, R; Pan, T
2016-06-15
Purpose: 4DCT is often used in radiation therapy treatment planning to define the extent of motion of the visible tumor (IGTV). Recently available software allows 4DCT images to be created without the use of an external motion surrogate. This study aims to compare this device-less algorithm to a standard device-driven technique (RPM) with regard to artifacts and the creation of treatment volumes. Methods: 34 lung cancer patients who had previously received a cine 4DCT scan on a GE scanner with an RPM-determined respiratory signal were selected. Cine images were sorted into 10 phases based on both the RPM signal and the device-less algorithm. Contours were created on standard and device-less maximum intensity projection (MIP) images using a region growing algorithm and manual adjustment to remove other structures. Variations in measurements due to intra-observer differences in contouring were assessed by repeating a subset of 6 patients 2 additional times. Artifacts in each phase image were assessed using normalized cross correlation at each bed position transition. A score between +1 (artifacts "better" in all phases for device-less) and −1 (RPM similarly better) was assigned for each patient based on these results. Results: Device-less IGTV contours were 2.1 ± 1.0% smaller than standard IGTV contours (not significant, p = 0.15). The Dice similarity coefficient (DSC) was 0.950 ± 0.006, indicating good similarity between the contours. Intra-observer variation resulted in standard deviations of 1.2 percentage points in percent volume difference and 0.005 in DSC measurements. Only two patients had improved artifacts with RPM, and the average artifact score (0.40) was significantly greater than zero. Conclusion: Device-less 4DCT can be used in place of the standard method for target definition, as no difference was observed between standard and device-less IGTVs. Phase image artifacts were significantly reduced with the device-less method.
Follow-up segmentation of lung tumors in PET and CT data
NASA Astrophysics Data System (ADS)
Opfer, Roland; Kabus, Sven; Schneider, Torben; Carlsen, Ingwer C.; Renisch, Steffen; Sabczynski, Jörg
2009-02-01
Early response assessment of cancer therapy is a crucial component towards a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. We have developed algorithms which allow the user to track both tumor volume and standardized uptake value (SUV) measurements during the therapy from series of CT and PET images, respectively. To prepare for tumor volume estimation we have developed a new technique for a fast, flexible, and intuitive 3D definition of meshes. This initial surface is then automatically adapted by means of a model-based segmentation algorithm and propagated to each follow-up scan. If necessary, manual corrections can be added by the user. To determine SUV measurements a prioritized region growing algorithm is employed. For an improved workflow all algorithms are embedded in a PET/CT therapy monitoring software suite giving the clinician unified and immediate access to all data sets. Whenever the user clicks on a tumor in a base-line scan, the courses of segmented tumor volumes and SUV measurements are automatically identified and displayed to the user as a graph plot. According to each course, the therapy progress can be classified as complete or partial response or as progressive or stable disease. We have tested our methods with series of PET/CT data from 9 lung cancer patients acquired at Princess Margaret Hospital in Toronto. Each patient underwent three PET/CT scans during a radiation therapy. Our results indicate that combining the mean metabolic activity in the tumor with the PET-based tumor volume can lead to earlier response detection than purely volume-based (CT diameter) or purely functional (e.g., SUVmax or SUVmean) response measures. The new software seems applicable for easy, fast, and reproducible quantification to routinely monitor tumor therapy.
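Since the workflow tracks standardized uptake values, the standard SUV normalization is shown below for reference. This is the common body-weight convention; the abstract does not state which SUV variant the software uses, so treat it as an assumption.

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    # Body-weight SUV: tissue activity concentration divided by
    # injected dose per gram of body weight (1 ml of tissue ~ 1 g).
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# E.g. 20 kBq/ml in the tumor, 370 MBq injected, 70 kg patient:
print(round(suv(20e3, 370e6, 70e3), 2))  # ~3.78
```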
Path Planning Algorithms for the Adaptive Sensor Fleet
NASA Technical Reports Server (NTRS)
Stoneking, Eric; Hosler, Jeff
2005-01-01
The Adaptive Sensor Fleet (ASF) is a general purpose fleet management and planning system being developed by NASA in coordination with NOAA. The current mission of ASF is to provide the capability for autonomous cooperative survey and sampling of dynamic oceanographic phenomena such as current systems and algae blooms. Each ASF vessel is a software model that represents a real world platform that carries a variety of sensors. The OASIS platform will provide the first physical vessel, outfitted with the systems and payloads necessary to execute the oceanographic observations described in this paper. The ASF architecture is being designed for extensibility to accommodate heterogeneous fleet elements, and is not limited to using the OASIS platform to acquire data. This paper describes the path planning algorithms developed for the acquisition phase of a typical ASF task. Given a polygonal target region to be surveyed, the region is subdivided according to the number of vessels in the fleet. The subdivision algorithm seeks a solution in which all subregions have equal area and minimum mean radius. Once the subregions are defined, a dynamic programming method is used to find a minimum-time path for each vessel from its initial position to its assigned region. This path plan includes the effects of water currents as well as avoidance of known obstacles. A fleet-level planning algorithm then shuffles the individual vessel assignments to find the overall solution which puts all vessels in their assigned regions in the minimum time. This shuffle algorithm may be described as a process of elimination on the sorted list of permutations of a cost matrix. All these path planning algorithms are facilitated by discretizing the region of interest onto a hexagonal tiling.
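A sketch of the minimum-time leg of such a planner: Dijkstra over a grid of current vectors, where the time to cross an edge is distance divided by effective speed (vessel speed plus the current component along the heading). This is a simplified stand-in for the paper's dynamic programming formulation, on a square rather than hexagonal grid, with unit cell spacing assumed.

```python
import heapq

def min_time_path(speed, current, start, goal):
    # Dijkstra on a grid of (east, north) current vectors; obstacles
    # could be modeled by omitting cells. Unit cell spacing assumed.
    rows, cols = len(current), len(current[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        t, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if t > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            u, v = current[nr][nc]
            eff = speed + u * dc + v * (-dr)  # heading (east, north) = (dc, -dr)
            if eff <= 0:                      # cannot make headway
                continue
            nt = t + 1.0 / eff
            if nt < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)], prev[(nr, nc)] = nt, (r, c)
                heapq.heappush(pq, (nt, (nr, nc)))
    if goal not in dist:
        return None, float("inf")
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Still water everywhere:
# still = [[(0.0, 0.0)] * 5 for _ in range(5)]
# print(min_time_path(1.0, still, (0, 0), (4, 4)))
```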
AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.
Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld
2016-08-01
There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory of existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code is available under the GPL license at https://github.com/algorun.
Rainfall estimation for real time flood monitoring using geostationary meteorological satellite data
NASA Astrophysics Data System (ADS)
Veerakachen, Watcharee; Raksapatcharawong, Mongkol
2015-09-01
Rainfall estimation from geostationary meteorological satellite data provides good spatial and temporal resolution, which is advantageous for real-time flood monitoring and warning systems. However, a rainfall estimation algorithm developed for one region needs to be adjusted for another climatic region. This work proposes computationally efficient rainfall estimation algorithms based on an Infrared Threshold Rainfall (ITR) method calibrated with regional ground truth. Hourly rain gauge data collected from 70 stations around the Chao-Phraya river basin were used for calibration and validation of the algorithms. The algorithm inputs were derived from FY-2E satellite observations consisting of infrared and water vapor imagery. The results were compared with the Global Satellite Mapping of Precipitation (GSMaP) near-real-time product (GSMaP_NRT) using the probability of detection (POD), root mean square error (RMSE) and linear correlation coefficient (CC) as performance indices. Comparison with the GSMaP_NRT product for real-time monitoring purposes shows that hourly rain estimates from the proposed algorithm with the error adjustment technique (ITR_EA) offer higher POD and approximately the same RMSE and CC, with less data latency.
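The core of an infrared-threshold estimator fits in a few lines: pixels colder than a brightness-temperature threshold are assigned a rain rate, and a bias correction is calibrated against gauges. The 235 K / 3 mm/h pair below echoes the classic GOES Precipitation Index settings and is an assumption here, as is the simple multiplicative form of the error adjustment; the paper calibrates both regionally.

```python
import numpy as np

def itr_estimate(tb, t_thresh=235.0, rate=3.0):
    # Infrared-threshold rainfall: cloud tops colder than t_thresh (K)
    # receive a constant rain rate (mm/h); values are illustrative.
    tb = np.asarray(tb, float)
    return np.where(tb < t_thresh, rate, 0.0)

def calibrate_bias(est, gauge):
    # Simple multiplicative adjustment toward gauge totals, in the
    # spirit of the paper's error-adjusted variant (ITR_EA).
    est, gauge = np.asarray(est, float), np.asarray(gauge, float)
    return est * (gauge.sum() / max(est.sum(), 1e-9))
```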
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).
Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen
2015-10-01
In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
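The fusion step that topped the benchmark can be illustrated with its plain-majority core: a voxel is labeled tumor when more than half of the candidate segmentations agree. The paper's fusion is hierarchical over ranked algorithms; this sketch shows only the basic vote.

```python
import numpy as np

def majority_vote(segmentations):
    # Fuse binary masks from several algorithms: label a voxel tumor
    # when more than half of the inputs agree.
    stack = np.stack([np.asarray(s, bool) for s in segmentations])
    return stack.sum(axis=0) > (len(segmentations) / 2.0)

# Three toy 1-D "masks":
print(majority_vote([[1, 1, 0], [1, 0, 0], [1, 1, 1]]))  # [True, True, False]
```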
A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes
Wang, Jianqiang; Sun, Xiaoyan; Guo, Junbin
2013-01-01
Detection of preceding vehicles in nighttime traffic scenes is an important part of the advanced driver assistance system (ADAS). This paper proposes a region tracking-based vehicle detection algorithm using image processing techniques. First, the brightness of the taillights at night is used as the typical feature, and an existing global detection algorithm is used to detect and pair the taillights. Once a vehicle is detected, a time series analysis model is introduced to predict vehicle positions and the possible region (PR) of the vehicle in the next frame. The vehicle is then detected only within the PR. This reduces detection time and avoids false pairing between bright spots inside and outside the PR. Additionally, we present a threshold-updating method to make the thresholds adaptive. Finally, experimental studies demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.
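A minimal sketch of the possible-region step, using constant-velocity extrapolation of the detected taillight-pair center in place of the paper's time series model; the margin is an illustrative assumption.

```python
def predict_region(centers, margin=20):
    # Extrapolate the next center from the last two frames (constant
    # velocity) and pad it to form the possible region (PR).
    (x0, y0), (x1, y1) = centers[-2], centers[-1]
    px, py = 2 * x1 - x0, 2 * y1 - y0
    return (px - margin, py - margin, px + margin, py + margin)

# Centers from two consecutive frames -> PR around (116, 52):
print(predict_region([(100, 50), (108, 51)]))
```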
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2017-04-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for building such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; the evaluation methods usually used in academic studies do not scale to large datasets. The method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated with new image data and with improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.
Nurturing a growing field: Computers & Geosciences
NASA Astrophysics Data System (ADS)
Mariethoz, Gregoire; Pebesma, Edzer
2017-10-01
Computational issues are becoming increasingly critical for virtually all fields of geoscience. This includes the development of improved algorithms and models, strategies for implementing high-performance computing, or the management and visualization of the large datasets provided by an ever-growing number of environmental sensors. Such issues are central to scientific fields as diverse as geological modeling, Earth observation, geophysics or climatology, to name just a few. Related computational advances, across a range of geoscience disciplines, are the core focus of Computers & Geosciences, which is thus a truly multidisciplinary journal.
NASA Astrophysics Data System (ADS)
Ci, Hui; Zhang, Qiang; Singh, Vijay P.; Xiao, Mingzhong; Liu, Lin
2016-08-01
Variations in frost days and growing season length (GSL) have been drawing increasing attention due to their impact on agriculture. The Xinjiang region in China is climatically arid and plays an important role in agricultural development. In this study, GSL and frost events are analyzed in both space and time, based on daily minimum, mean and maximum surface air temperature data covering the period 1961-2010. Results indicate that: (1) a significant lengthening of GSL is detected during 1961-2010 in Xinjiang, at a rate of about 2.5 days per decade; the start of the growing season advances by 0.7 days per decade and the end is delayed by 1.6 days per decade. Generally, GSL in southern Xinjiang increases at a larger magnitude than in other regions of Xinjiang; (2) the longer GSL and the larger changes in growing season start (GSS), growing season end (GSE) and GSL in southern Xinjiang imply a higher sensitivity of the growing season's response to climate warming. GSL is also closely related to latitude: higher latitudes usually correspond to a later start and an earlier end of the growing season, and hence a shorter GSL. In general, a northward increase of 1° of latitude brings an 8-day delay in the start of the growing season and a 6-day advance of its end, so the GSL is 14 days shorter; (3) GSL under different guarantee rates can reflect light and heat resources over Xinjiang; the GSL at the 80% guarantee rate is 5-14 days shorter than the long-term annual mean GSL; (4) lengthening of the GSL has the potential to increase agricultural production. However, negative influences of climate warming, such as enhanced evapotranspiration and increases in weeds, insects, and pathogen-mediated plant diseases, should also be considered in the planning, management and development of agriculture in Xinjiang.
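For readers who want to reproduce a temperature-defined growing season, a common convention is sketched below: the season starts on the first day of the first run of several consecutive days above a temperature threshold, and ends at the first such run below it in the second half of the year. The 5 °C threshold and 5-day run are widespread defaults and are assumptions here; the paper's exact definition may differ.

```python
import numpy as np

def growing_season(tmean, thresh=5.0, run=5):
    # tmean: daily mean temperature (degC) for one calendar year.
    t = np.asarray(tmean, float)
    above, below = t > thresh, t <= thresh

    def first_run(mask, start_idx=0):
        count = 0
        for i in range(start_idx, len(mask)):
            count = count + 1 if mask[i] else 0
            if count == run:
                return i - run + 1
        return None

    gss = first_run(above)              # growing season start index
    gse = first_run(below, start_idx=182)  # end: first cold run after mid-year
    if gss is None or gse is None or gse <= gss:
        return None
    return gss, gse, gse - gss          # start, end, GSL in days
```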
Interactive semiautomatic contour delineation using statistical conditional random fields framework.
Hu, Yu-Chi; Grossberg, Michael D; Wu, Abraham; Riaz, Nadeem; Perez, Carmen; Mageras, Gig S
2012-07-01
Contouring a normal anatomical structure during radiation treatment planning requires significant time and effort. The authors present a fast and accurate semiautomatic contour delineation method to reduce the time and effort required of expert users. Following an initial segmentation on one CT slice, the user marks the target organ and nontarget pixels with a few simple brush strokes. The algorithm calculates statistics from this information that, in turn, determine the parameters of an energy function containing both boundary and regional components. The method uses a conditional random field graphical model to define the energy function to be minimized for obtaining an estimated optimal segmentation, and a graph partition algorithm to efficiently solve the energy function minimization. Organ boundary statistics are estimated from the segmentation and propagated to subsequent images; regional statistics are estimated from the simple brush strokes that are either propagated or redrawn as needed on subsequent images. This greatly reduces the user input needed and speeds up segmentation. The proposed method can be further accelerated with graph-based interpolation of alternating slices in place of user-guided segmentation. CT images from phantoms and patients were used to evaluate this method. The authors determined the sensitivity and specificity of organ segmentations using physician-drawn contours as ground truth, as well as the predicted-to-ground-truth surface distances. Finally, three physicians evaluated the contours for subjective acceptability. Interobserver and intraobserver analyses were also performed, and Bland-Altman plots were used to evaluate agreement. Liver and kidney segmentations in patient volumetric CT images show that boundary samples provided on a single CT slice can be reused through the entire 3D stack of images to obtain accurate segmentation. In liver, our method has better sensitivity and specificity (0.925 and 0.995) than region growing (0.897 and 0.995) and level set methods (0.912 and 0.985), as well as a shorter mean predicted-to-ground-truth distance (2.13 mm) compared with region growing (4.58 mm) and level set methods (8.55 mm and 4.74 mm). Similar results are observed in kidney segmentation. Physician evaluation of ten liver cases showed that 83% of contours did not need any modification, while 6% of contours needed modifications as assessed by two or more evaluators. In interobserver and intraobserver analysis, Bland-Altman plots showed our method to have better repeatability than the manual method, while delineation time was 15% faster on average. Our method achieves high accuracy in liver and kidney segmentation and considerably reduces the time and labor required for contour delineation. Since it extracts purely statistical information from the samples interactively specified by expert users, the method avoids heuristic assumptions commonly used by other methods. In addition, the method can be expanded to 3D directly without modification because the underlying graphical framework and graph partition optimization method fit naturally with the image grid structure.
GC13I-0857: Designing a Frost Forecasting Service for Small Scale Tea Farmers in East Africa
NASA Technical Reports Server (NTRS)
Adams, Emily C.; Wanjohi, James Nyaga; Ellenburg, Walter Lee; Limaye, Ashutosh S.; Mugo, Robinson M.; Flores Cordova, Africa Ixmucane; Irwin, Daniel; Case, Jonathan; Malaso, Susan; Sedah, Absae
2017-01-01
Kenya is the third largest tea exporter in the world, producing 10% of the world's black tea. Sixty percent of this production comes largely from small-scale tea holders, with an average farm size of 1.04 acres and an annual net income of $1,075. According to a recent evaluation, a typical frost event in the tea growing region causes about $200 in losses, which can be catastrophic for a small holder farm. A 72-hour frost forecast would provide these small-scale tea farmers with enough notice to reduce losses by approximately $80 annually. With this knowledge, SERVIR, a joint NASA-USAID initiative that brings Earth observations for improved decision making in developing countries, sought to design a frost monitoring and forecasting service that would provide farmers with enough lead time to react to and protect against a forecasted frost occurrence on their farm. SERVIR Eastern and Southern Africa, through its implementing partner, the Regional Centre for Mapping of Resources for Development (RCMRD), designed a service that included multiple stakeholder engagement events whereby stakeholders from the tea industry value chain were invited to share their experiences so that the exact needs and flow of information could be identified. This unique event enabled the design of a service that fits the specifications of the stakeholders. The monitoring service component uses the MODIS Land Surface Temperature product to identify frost occurrences in near-real time. The prediction component, currently under testing, uses the 2-m air temperature, relative humidity, and 10-m wind speed from a series of high-resolution Weather Research and Forecasting (WRF) numerical weather prediction model runs over eastern Kenya as inputs to a frost prediction algorithm. Accuracy and sensitivity of the algorithm are being assessed with observations collected from the farmers using a smart phone app developed specifically to report frost occurrences, and with data shared through the partner network developed at the stakeholder engagement meeting. This presentation will illustrate the efficacy of our frost forecasting algorithm, and a way forward for incorporating these forecasts in a meaningful way for the key decision makers - the small-scale farmers of East Africa.
Designing a Frost Forecasting Service for Small Scale Tea Farmers in East Africa
NASA Astrophysics Data System (ADS)
Adams, E. C.; Nyaga, J. W.; Ellenburg, W. L.; Limaye, A. S.; Mugo, R. M.; Flores Cordova, A. I.; Irwin, D.; Case, J.; Malaso, S.; Sedah, A.
2017-12-01
Kenya is the third largest tea exporter in the world, producing 10% of the world's black tea. Sixty percent of this production comes largely from small-scale tea holders, with an average farm size of 1.04 acres and an annual net income of $1,075. According to a recent evaluation, a typical frost event in the tea growing region causes about $200 in losses, which can be catastrophic for a small holder farm. A 72-hour frost forecast would provide these small-scale tea farmers with enough notice to reduce losses by approximately $80 annually. With this knowledge, SERVIR, a joint NASA-USAID initiative that brings Earth observations for improved decision making in developing countries, sought to design a frost monitoring and forecasting service that would provide farmers with enough lead time to react to and protect against a forecasted frost occurrence on their farm. SERVIR Eastern and Southern Africa, through its implementing partner, the Regional Centre for Mapping of Resources for Development (RCMRD), designed a service that included multiple stakeholder engagement events whereby stakeholders from the tea industry value chain were invited to share their experiences so that the exact needs and flow of information could be identified. This unique event enabled the design of a service that fits the specifications of the stakeholders. The monitoring service component uses the MODIS Land Surface Temperature product to identify frost occurrences in near-real time. The prediction component, currently under testing, uses the 2-m air temperature, relative humidity, and 10-m wind speed from a series of high-resolution Weather Research and Forecasting (WRF) numerical weather prediction model runs over eastern Kenya as inputs to a frost prediction algorithm. Accuracy and sensitivity of the algorithm are being assessed with observations collected from the farmers using a smart phone app developed specifically to report frost occurrences, and with data shared through the partner network developed at the stakeholder engagement meeting. This presentation will illustrate the efficacy of our frost forecasting algorithm, and a way forward for incorporating these forecasts in a meaningful way for the key decision makers - the small-scale farmers of East Africa.
Auzias, G; Brun, L; Deruelle, C; Coulon, O
2015-05-01
Recent interest has been growing concerning points of maximum depth within folds, the sulcal pits, which can be used as reliable cortical landmarks. These remarkable points on the cortical surface are defined algorithmically as the outcome of an automatic extraction procedure. The influence of several crucial parameters of the reference technique (Im et al., 2010) has not been evaluated extensively, and no optimization procedure has been proposed so far. Designing an appropriate optimization framework for these parameters is mandatory to guarantee the reproducibility of results across studies and to ensure the feasibility of sulcal pit extraction and analysis on large cohorts. In this work, we propose a framework specifically dedicated to the optimization of the parameters of the method. This optimization framework relies on new measures for better quantifying the reproducibility of the number of sulcal pits per region across individuals, in line with the assumption of one-to-one correspondence of sulcal roots across individuals, which is an explicit aspect of the sulcal roots model (Régis et al., 2005). Our procedure benefits from a combination of improvements, including the use of a convenient sulcal depth estimation, and is methodologically sound. Our experiments on two different groups of individuals, with a total of 137 subjects, show an increased reliability across subjects in deeper sulcal pits, as compared to the previous approach, and cover the entire cortical surface, including shallower and more variable folds that were not considered before. The effectiveness of our method ensures the feasibility of a systematic study of sulcal pits on large cohorts. On top of these methodological advances, we quantify the relationship between the reproducibility of the number of sulcal pits per region across individuals and their respective depth, and demonstrate the relatively high reproducibility of several pits corresponding to shallower folds. Finally, we report new results regarding local pit asymmetry, providing evidence that the algorithmic and conceptual approach defended here may contribute to a better understanding of the key role of sulcal pits in neuroanatomy.
NASA Technical Reports Server (NTRS)
Boruta, N.
1977-01-01
The question of whether a perturbed photospheric area can grow into a region of reduced temperature resembling a sunspot is investigated by considering whether instabilities exist that can lead to a growing temperature change and corresponding magnetic-field concentration in some region of the photosphere. After showing that Alfven cooling can lead to these instabilities, the effect of a heat sink on the temperature development of a perturbed portion of the photosphere is studied. A simple form of Alfven-wave cooling is postulated, and computations are performed to determine whether growing modes exist for physically relevant boundary conditions. The results indicate that simple inhibition of convection does not give growing modes, but Alfven-wave production can result in cooling that leads to growing field concentration. It is concluded that since growing instabilities can occur with strong enough cooling, it is quite possible that energy loss through Alfven waves gives rise to a self-generating temperature change and sunspot formation.
An introduction to quantum machine learning
NASA Astrophysics Data System (ADS)
Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco
2015-04-01
Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers have investigated whether quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.
Distributed Coordination of Energy Storage with Distributed Generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Stoorvogel, Antonie A.
2016-07-18
With a growing emphasis on energy efficiency and system flexibility, great effort has recently been made in developing distributed energy resources (DERs), including distributed generators and energy storage systems. This paper first formulates an optimal coordination problem considering constraints at both the system and device levels, including the power balance constraint, generator output limits, storage energy and power capacities, and charging/discharging efficiencies. An algorithm is then proposed to dynamically and automatically coordinate DERs in a distributed manner. With the proposed algorithm, the agent at each DER maintains only a local incremental cost and updates it through information exchange with a few neighbors, without relying on any central decision maker. Simulation results are used to illustrate and validate the proposed algorithm.
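[Editor's note] As a rough illustration of this kind of neighbor-only coordination, the sketch below runs a consensus-plus-mismatch-tracking update for four generators with quadratic costs. It is a generic distributed economic dispatch scheme under simplifying assumptions (quadratic costs, no capacity limits, a fixed doubly stochastic weight matrix), not the paper's exact algorithm.

    import numpy as np

    # Quadratic generator costs C_i(P) = a_i P^2 + b_i P, so the local
    # incremental (marginal) cost is dC_i/dP = 2 a_i P + b_i.
    a = np.array([0.10, 0.12, 0.08, 0.15])
    b = np.array([2.0, 1.8, 2.5, 2.2])
    demand = np.array([30.0, 25.0, 20.0, 25.0])   # hypothetical local loads (total 100)

    # Doubly stochastic weights for a 4-agent ring (each talks to 2 neighbors).
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])

    eps = 0.05            # feedback gain on the local mismatch estimate
    lam = b.copy()        # local incremental-cost estimates (so P = 0 initially)
    P = np.zeros(4)
    y = demand - P        # local estimates of the supply-demand mismatch

    for _ in range(2000):
        lam = W @ lam + eps * y        # consensus + mismatch feedback
        P_new = (lam - b) / (2 * a)    # each agent's cost-minimizing output
        y = W @ y - (P_new - P)        # dynamic average consensus tracks mismatch
        P = P_new

    print(lam)       # all agents approach a common incremental cost (~7.49 here)
    print(P.sum())   # total generation approaches the total demand (100)

Because the weight matrix is doubly stochastic, the quantity sum(y) - (total demand - total generation) is invariant, so at consensus the mismatch is driven to zero using only neighbor exchanges.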
A Formal Algorithm for Routing Traces on a Printed Circuit Board
NASA Technical Reports Server (NTRS)
Hedgley, David R., Jr.
1996-01-01
This paper addresses the classical problem of printed circuit board routing: that is, the problem of automatic routing by a computer other than by brute force, which causes the execution time to grow exponentially as a function of the complexity. Most of the present solutions are either inexpensive but not efficient and fast, or efficient and fast but very costly. Many solutions are proprietary, so not much is written or known about the actual algorithms upon which these solutions are based. This paper presents a formal algorithm for routing traces on a printed circuit board. The solution presented is very fast and efficient, and for the first time speaks to the question eloquently by way of symbolic statements.
Simulated quantum computation of molecular energies.
Aspuru-Guzik, Alán; Dutoi, Anthony D; Love, Peter J; Head-Gordon, Martin
2005-09-09
The calculation time for the energy of atoms and molecules scales exponentially with system size on a classical computer but polynomially using quantum algorithms. We demonstrate that such algorithms can be applied to problems of chemical interest using modest numbers of quantum bits. Calculations of the water and lithium hydride molecular ground-state energies have been carried out on a quantum computer simulator using a recursive phase-estimation algorithm. The recursive algorithm reduces the number of quantum bits required for the readout register from about 20 to 4. Mappings of the molecular wave function to the quantum bits are described. An adiabatic method for the preparation of a good approximate ground-state wave function is described and demonstrated for a stretched hydrogen molecule. The number of quantum bits required scales linearly with the number of basis functions, and the number of gates required grows polynomially with the number of quantum bits.
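[Editor's note] The recursive (iterative) phase-estimation idea can be illustrated classically: a single readout qubit extracts the bits of an eigenphase one at a time, from least to most significant, with a feedback rotation canceling the bits already measured. Below is a minimal classical simulation of that general scheme; it assumes an exactly m-bit eigenphase and noiseless measurement, and illustrates iterative phase estimation generically rather than the authors' exact recursive algorithm.

    import numpy as np

    def iterative_phase_estimation(phi, m):
        """Classically simulate Kitaev-style iterative phase estimation.

        phi : eigenphase of U in [0, 1), assumed to have an exact m-bit expansion
        m   : number of bits to read out, one at a time, with a single ancilla
        """
        bits = [0] * (m + 1)       # bits[j] = b_j in phi = 0.b1 b2 ... bm (binary)
        feedback = 0.0             # rotation canceling already-measured bits
        for j in range(m, 0, -1):  # least-significant bit first
            # Ancilla phase after controlled-U^(2^(j-1)) plus the feedback rotation
            theta = 2.0 * np.pi * (2 ** (j - 1)) * phi + feedback
            p1 = np.sin(theta / 2.0) ** 2    # probability of measuring |1>
            bits[j] = int(p1 > 0.5)          # deterministic for exact phases
            feedback = feedback / 2.0 - np.pi * bits[j] / 2.0
        return sum(bit * 2.0 ** (-j) for j, bit in enumerate(bits))

    print(iterative_phase_estimation(0.625, 3))   # 0.625 == 0.101 in binary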
A Voxel-Based Filtering Algorithm for Mobile LiDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
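[Editor's note] A minimal sketch of the block-then-voxel partitioning step is given below; for brevity it uses a uniform voxel grid inside each 2-D block rather than the paper's octree, and the block and voxel sizes are illustrative.

    import numpy as np
    from collections import defaultdict

    def partition_points(points, block_size=50.0, voxel_size=0.5):
        """Partition N x 3 LiDAR points into 2-D xy blocks, then 3-D voxels.

        Returns {block_key: {voxel_key: [point indices]}}. A uniform grid
        stands in for the paper's octree; sizes are illustrative."""
        points = np.asarray(points, dtype=float)
        block_keys = np.floor(points[:, :2] / block_size).astype(int)
        voxel_keys = np.floor(points / voxel_size).astype(int)
        blocks = defaultdict(lambda: defaultdict(list))
        for i, (bk, vk) in enumerate(zip(map(tuple, block_keys),
                                         map(tuple, voxel_keys))):
            blocks[bk][vk].append(i)
        return blocks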
NASA Astrophysics Data System (ADS)
Wasserman, Richard Marc
The radiation therapy treatment planning (RTTP) process may be subdivided into three planning stages: gross tumor delineation, clinical target delineation, and modality dependent target definition. The research presented will focus on the first two planning tasks. A gross tumor target delineation methodology is proposed which focuses on the integration of MRI, CT, and PET imaging data towards the generation of a mathematically optimal tumor boundary. The solution to this problem is formulated within a framework integrating concepts from the fields of deformable modelling, region growing, fuzzy logic, and data fusion. The resulting fuzzy fusion algorithm can integrate both edge and region information from multiple medical modalities to delineate optimal regions of pathological tissue content. The subclinical boundaries of an infiltrating neoplasm cannot be determined explicitly via traditional imaging methods and are often defined to extend a fixed distance from the gross tumor boundary. In order to improve the clinical target definition process an estimation technique is proposed via which tumor growth may be modelled and subclinical growth predicted. An in vivo, macroscopic primary brain tumor growth model is presented, which may be fit to each patient undergoing treatment, allowing for the prediction of future growth and consequently the ability to estimate subclinical local invasion. Additionally, the patient specific in vivo tumor model will be of significant utility in multiple diagnostic clinical applications.
Graphics Processing Unit Assisted Thermographic Compositing
NASA Technical Reports Server (NTRS)
Ragasa, Scott; McDougal, Matthew; Russell, Sam
2012-01-01
Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion has allowed for supercomputer-level results at individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.
Reply & Supply: Efficient crowdsourcing when workers do more than answer questions
McAndrew, Thomas C.; Guseva, Elizaveta A.
2017-01-01
Crowdsourcing works by distributing many small tasks to large numbers of workers, yet the true potential of crowdsourcing lies in workers doing more than performing simple tasks—they can apply their experience and creativity to provide new and unexpected information to the crowdsourcer. One such case is when workers not only answer a crowdsourcer’s questions but also contribute new questions for subsequent crowd analysis, leading to a growing set of questions. This growth creates an inherent bias for early questions since a question introduced earlier by a worker can be answered by more subsequent workers than a question introduced later. Here we study how to perform efficient crowdsourcing with such growing question sets. By modeling question sets as networks of interrelated questions, we introduce algorithms to help curtail the growth bias by efficiently distributing workers between exploring new questions and addressing current questions. Experiments and simulations demonstrate that these algorithms can efficiently explore an unbounded set of questions without losing confidence in crowd answers. PMID:28806413
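[Editor's note] A toy sketch of the explore/exploit balance this implies is shown below: with some probability a worker is routed to the least-answered (typically newest) question, countering the head start of early questions; otherwise the worker reinforces a question whose answer is still uncertain. This epsilon-greedy allocation and its confidence cutoff are illustrative stand-ins, not the authors' network-based algorithms.

    import random

    def next_question(questions, answer_counts, eps=0.25, confident_at=10,
                      rng=random):
        """Route an arriving worker to a question.

        With probability eps, explore: serve the least-answered question so
        that late-arriving questions catch up. Otherwise, exploit: serve a
        random question whose answer count is below an (illustrative)
        confidence cutoff."""
        if rng.random() < eps:
            return min(questions, key=lambda q: answer_counts.get(q, 0))
        unresolved = [q for q in questions
                      if answer_counts.get(q, 0) < confident_at]
        return rng.choice(unresolved if unresolved else questions)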
Rashno, Abdolreza; Koozekanani, Dara D; Drayna, Paul M; Nazari, Behzad; Sadri, Saeed; Rabbani, Hossein; Parhi, Keshab K
2018-05-01
This paper presents a fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema. The OCT image is segmented using a novel neutrosophic transformation and a graph-based shortest path method. In the neutrosophic domain, an image is transformed into three sets: T (true), I (indeterminate), which represents noise, and F (false). This paper makes four key contributions. First, a new method is introduced to compute the indeterminacy set I, and a new correction operation is introduced to compute the set F in the neutrosophic domain. Second, a graph shortest-path method is applied in the neutrosophic domain to segment the inner limiting membrane and the retinal pigment epithelium as regions of interest (ROI), and the outer plexiform layer and inner segment myoid as middle layers, using a novel definition of the edge weights. Third, a new cost function for cluster-based fluid/cyst segmentation in the ROI is presented, which also includes a novel approach for estimating the number of clusters in an automated manner. Fourth, the final fluid regions are obtained by ignoring very small regions and the regions between middle layers. The proposed method is evaluated using two publicly available datasets, Duke and Optima, and a third local dataset from the UMN clinic, which is available online. The proposed algorithm outperforms the previously proposed Duke algorithm by 8% with respect to the dice coefficient and by 5% with respect to precision on the Duke dataset, while achieving about the same sensitivity. Also, the proposed algorithm outperforms a prior method on the Optima dataset by 6%, 22%, and 23% with respect to the dice coefficient, sensitivity, and precision, respectively. Finally, the proposed algorithm achieves sensitivities of 67.3%, 88.8%, and 76.7% for the Duke, Optima, and University of Minnesota (UMN) datasets, respectively.
Using Landsat digital data to detect moisture stress in corn-soybean growing regions
NASA Technical Reports Server (NTRS)
Thompson, D. R.; Wehmanen, O. A.
1980-01-01
As a part of a follow-on study to the moisture stress detection effort conducted in the Large Area Crop Inventory Experiment (LACIE), a technique utilizing transformed Landsat digital data was evaluated for detecting moisture stress in humid growing regions using sample segments from Iowa, Illinois, and Indiana. At known growth stages of corn and soybeans, segments were classified as undergoing moisture stress or not undergoing stress. The remote-sensing-based information was compared to a weekly ground-based index (Crop Moisture Index). This comparison demonstrated that the remote sensing technique could be used to monitor the growing conditions within a region where corn and soybeans are the major crop.
Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.
Yang, Shengxiang
2008-01-01
In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity, while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. In this way, diversity is not only maintained but maintained in a way that is better adapted to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. A sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
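[Editor's note] A minimal sketch of the elitism-based immigrants step, as described above, follows: the elite of the previous generation is mutated to create immigrants, which then replace the worst individuals. The binary encoding, mutation operator, and replacement rule are illustrative choices.

    import random

    def elitism_based_immigrants(population, fitness, n_imm=5, p_mut=0.1,
                                 rng=random):
        """One generation's immigrant step (sketch): mutate the previous
        elite to create immigrants, then replace the worst individuals.
        Assumes a binary (0/1 list) encoding with bit-flip mutation."""
        elite = max(population, key=fitness)
        immigrants = [
            [1 - g if rng.random() < p_mut else g for g in elite]
            for _ in range(n_imm)
        ]
        population = sorted(population, key=fitness)  # worst individuals first
        population[:n_imm] = immigrants
        return population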
CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.
2011-01-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404
SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M
Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m - fσ ≤ I_PET ≤ m + fσ, so that a neighboring voxel is appended to the region based on its similarity to the current region. As the relaxing factor f (f ≥ 0) increases, the resulting volume increases monotonically, with a sharp increase when the region just grows into the background. The optimal f that separates the tumor from the background is defined as the first point with the local maximum curvature on an error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, Active Contours (AC), Geodesic Active Contours (GAC), and Graph Cuts (GC) methods were tested. The dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate the segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), Otsu (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46) and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identifies the optimal relaxing factor in ARG for tumor segmentation in PET. This work was supported in part by National Cancer Institute Grant R01 CA172638; the dataset was provided by AAPM TG211.
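[Editor's note] A sketch of the parameter selection step is shown below: sweep f, record the region-grown volume, and take the first local curvature maximum of the f-volume curve. For brevity the curvature is computed on the (normalized) raw curve rather than on a fitted error function, and grow_region is a user-supplied stand-in for the ARG itself.

    import numpy as np

    def optimal_relaxing_factor(fs, grow_region):
        """Pick f at the first local maximum of curvature of the f-volume curve.

        fs          : increasing array of candidate relaxing factors
        grow_region : callable f -> segmented volume (voxel count); stands in
                      for region growing with m - f*sigma <= I <= m + f*sigma
        """
        vols = np.array([grow_region(f) for f in fs], dtype=float)
        v = (vols - vols.min()) / (np.ptp(vols) + 1e-12)   # normalize to [0, 1]
        dv = np.gradient(v, fs)
        d2v = np.gradient(dv, fs)
        kappa = np.abs(d2v) / (1.0 + dv ** 2) ** 1.5       # curve curvature
        for i in range(1, len(fs) - 1):
            if kappa[i] >= kappa[i - 1] and kappa[i] > kappa[i + 1]:
                return fs[i]                               # first local maximum
        return fs[int(np.argmax(kappa))]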
Balancing Growth, Harvest, and Consumption of Hardwood Resources in the North Central Region
Stephen R. Shifley; Neal Sullivan
2001-01-01
The volume of timber in the North Central Region of the United States (IN, IL, IA, MN, WI, MI) has more than doubled since 1950. Annual growth of growing stock on timberland is about 2.3 billion cubic feet (8.5 billion board feet). Removals from growing stock are about 1.1 billion cubic feet (3.4 billion board feet). However, the people who live in the region consume...
A Fast Implementation of the ISOCLUS Algorithm
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline
2003-01-01
Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo, et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O(kn) time, where k denotes the current number of centers. Traditional techniques for accelerating nearest neighbor searching involve storing the k centers in a data structure. However, because of the iterative nature of the algorithm, this data structure would need to be rebuilt with each new iteration. Our approach is to store the data points in a kd-tree data structure. The assignment of points to nearest neighbors is carried out by a filtering process, which successively eliminates centers that cannot possibly be the nearest neighbor for a given region of space. This algorithm is significantly faster, because large groups of data points can be assigned to their nearest center in a single operation. Preliminary results on a number of real Landsat datasets show that our revised ISOCLUS-like scheme runs about twice as fast.
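[Editor's note] The core of each iteration is the nearest-center assignment. The paper accelerates it by storing the data points in a kd-tree and filtering candidate centers per region of space; the short sketch below instead queries a kd-tree rebuilt over the centers each iteration, which yields the same assignments (though not the same asymptotics or filtering machinery) and is meant only to make the iteration concrete.

    import numpy as np
    from scipy.spatial import cKDTree

    def isoclus_style_iteration(points, centers):
        """One assignment + update pass of a k-means/ISOCLUS-style iteration.

        Assignment uses a kd-tree over the current centers (a simplification
        of the paper's point-based kd-tree filtering). Returns updated centers
        and the per-cluster statistics (counts, mean distances) that
        ISOCLUS's split/merge heuristics consume."""
        dists, labels = cKDTree(centers).query(points)
        new_centers = np.array([
            points[labels == c].mean(axis=0) if np.any(labels == c) else centers[c]
            for c in range(len(centers))
        ])
        counts = np.bincount(labels, minlength=len(centers))
        avg_dist = np.array([dists[labels == c].mean() if counts[c] else 0.0
                             for c in range(len(centers))])
        return new_centers, counts, avg_dist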
Spatial patterns of development drive water use
G. M. Sanchez; J. W. Smith; A. Terando; G. Sun; R. K. Meentemeyer
2018-01-01
Water availability is becoming more uncertain as human populations grow, cities expand into rural regions and the climate changes. In this study, we examine the functional relationship between water use and the spatial patterns of developed land across the rapidly growing region of the southeastern United States. We quantified the spatial pattern of developed land...
Deficit irrigation strategies and their impact on yield and nutritional quality of pomegranate fruit
USDA-ARS?s Scientific Manuscript database
In arid regions of the world, farmers use deficit irrigation (DI) strategies to supply water at levels below full crop evapotranspiration throughout the growing season or at specific phenological stages. In water-sensitive regions, growing crops that are water stress-resistant and tolerant of arid e...
Observation of quantum criticality with ultracold atoms in optical lattices
NASA Astrophysics Data System (ADS)
Zhang, Xibo
As biological problems are becoming more complex and data are growing at a rate much faster than that of computer hardware, new and faster algorithms are required. This dissertation investigates computational problems arising in two of the fields, comparative genomics and epigenomics, and employs a variety of computational techniques to address the problems. One fundamental question in the study of chromosome evolution is whether the rearrangement breakpoints happen at random positions or along certain hotspots. We investigate the breakpoint reuse phenomenon, and show analyses that support the more recently proposed fragile breakage model, as opposed to the conventional random breakage model, for chromosome evolution. The identification of syntenic regions between chromosomes forms the basis for studies of genome architectures, comparative genomics, and evolutionary genomics. Previous synteny block reconstruction algorithms could not be scaled to the large number of mammalian genomes being sequenced; neither did they address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and the evolutionary history of large-scale duplications prevalent in plant genomes. We present a new unified synteny block generation algorithm based on the A-Bruijn graph framework that overcomes these shortcomings. In epigenome sequencing, a sample may contain a mixture of epigenomes, and there is a need to resolve the distinct methylation patterns from the mixture. Many sequencing applications, such as haplotype inference for diploid or polyploid genomes, and metagenomic sequencing, share a similar objective: to infer a set of distinct assemblies from reads that are sequenced from a heterogeneous sample and subsequently aligned to a reference genome. We model the problem from both combinatorial and statistical angles. First, we describe a theoretical framework. A linear-time algorithm is then given to resolve a minimum number of assemblies that are consistent with all reads, substantially improving on previous algorithms. An efficient algorithm is also described to determine a set of assemblies that is consistent with a maximum subset of the reads, a previously untreated problem. We then prove that allowing nested reads or permitting mismatches between reads and their assemblies renders these problems NP-hard. Second, we describe a mixture model-based approach, and apply the model to the detection of allele-specific methylation.
The Time Series Technique for Aerosol Retrievals over Land from MODIS: Algorithm MAIAC
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Wang, Yujie
2008-01-01
Atmospheric aerosols interact with sunlight by scattering and absorbing radiation. By changing the irradiance of the Earth's surface, modifying cloud fractional cover and microphysical properties, and through a number of other mechanisms, they affect the energy balance, hydrological cycle, and planetary climate [IPCC, 2007]. In many world regions there is a growing impact of aerosols on air quality and human health. The Earth Observing System [NASA, 1999] initiated high quality global Earth observations and operational aerosol retrievals over land. With the wide swath (2300 km) of the MODIS instrument, the MODIS Dark Target algorithm [Kaufman et al., 1997; Remer et al., 2005; Levy et al., 2007], currently complemented with the Deep Blue method [Hsu et al., 2004], provides a daily global view of planetary atmospheric aerosol. The MISR algorithm [Martonchik et al., 1998; Diner et al., 2005] makes high quality aerosol retrievals in 300 km swaths, covering the globe in 8 days. Although the MODIS aerosol program has been very successful, several issues in the retrieval algorithms remain unresolved. The current processing is pixel-based and relies on single-orbit data. Such an approach produces a single measurement for every pixel characterized by two main unknowns, aerosol optical thickness (AOT) and surface reflectance (SR). This lack of information constitutes a fundamental problem of remote sensing which cannot be resolved without a priori information. For example, the MODIS Dark Target algorithm makes spectral assumptions about surface reflectance, whereas the Deep Blue method uses an ancillary global database of surface reflectance composed from minimal monthly measurements with Rayleigh correction. Both algorithms use a Lambertian surface model. The surface-related assumptions in the aerosol retrievals may affect subsequent atmospheric correction in unintended ways. For example, the Dark Target algorithm uses an empirical relationship to predict SR in the Blue (B3) and Red (B1) bands from the 2.1 μm channel (B7) for the purpose of aerosol retrieval. Obviously, the subsequent atmospheric correction will produce the same SR in the red and blue bands as predicted, i.e., an empirical function of the 2.1 μm reflectance. In other words, the spectral, spatial and temporal variability of surface reflectance in the Blue and Red bands appears borrowed from band B7. This may have certain implications for vegetation and global carbon analysis, because the chlorophyll-sensing bands B1 and B3 are effectively substituted in terms of variability by band B7, which is sensitive to plant liquid water. This chapter describes a new, recently developed generic aerosol-surface retrieval algorithm for MODIS. The Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm simultaneously retrieves AOT and the surface bidirectional reflectance factor (BRF) using the time series of MODIS measurements.
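[Editor's note] For concreteness, the classic Dark Target spectral assumption mentioned above can be written as fixed ratios of the 2.1 μm reflectance; the coefficients below are the original Kaufman et al. (1997) values, and later algorithm versions (e.g., Levy et al., 2007) make the ratios depend on a SWIR vegetation index and viewing geometry.

    def dark_target_surface_reflectance(rho_2p1):
        """Classic Dark Target assumption: surface reflectance in the red
        (B1) and blue (B3) bands as fixed ratios of the 2.1 um band (B7).
        Coefficients follow the original Kaufman et al. (1997) relation."""
        rho_red = rho_2p1 / 2.0
        rho_blue = rho_2p1 / 4.0
        return rho_red, rho_blue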
Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling
NASA Astrophysics Data System (ADS)
Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan
2018-01-01
In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretched Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary, even though cell sizes are allowed to grow toward the boundaries due to the diffusive nature of electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferred because the modelling area of interest can be restricted to the target region, and only a few surrounding absorbing layers can effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling areas for these two different geophysical data sets collected from the same survey area can be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling using a staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML also shows good accuracy compared to the Dirichlet boundary. Furthermore, the modelling algorithm using the CFS-PML shows advantages in computational time and memory over that using the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
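[Editor's note] For reference, the complex frequency-shifted stretching factor that defines the CFS-PML in stretched Cartesian coordinates is commonly written in the Kuzuoglu-Mittra form below; the notation is the standard one and is assumed here rather than taken from the paper.

$$ s_u(\omega) = \kappa_u + \frac{\sigma_u}{\alpha_u + \mathrm{i}\,\omega\,\varepsilon_0}, \qquad u \in \{x, y, z\}, $$

so that each spatial derivative in the governing equations is replaced by $\partial/\partial u \rightarrow (1/s_u)\,\partial/\partial u$ inside the layer. Here $\kappa_u \ge 1$ stretches the real coordinate, $\sigma_u$ is the PML conductivity profile, and the frequency shift $\alpha_u > 0$ improves absorption of low-frequency and grazing-incidence fields.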
Reconstruction of 3d Models from Point Clouds with Hybrid Representation
NASA Astrophysics Data System (ADS)
Hu, P.; Dong, Z.; Yuan, P.; Liang, F.; Yang, B.
2018-05-01
The three-dimensional (3D) reconstruction of urban buildings from point clouds has long been an active topic in applications related to human activities. However, because building structures differ significantly in complexity, 3D reconstruction remains a challenging issue, especially for freeform surfaces. In this paper, we present a new reconstruction algorithm that represents the 3D model of a building as a combination of regular structures and irregular surfaces, where the regular structures are parameterized plane primitives and the irregular surfaces are expressed as meshes. The extraction of irregular surfaces starts with an over-segmentation of the unstructured point data; a region growing approach based on the adjacency graph of super-voxels is then applied to collapse these super-voxels, and the freeform surfaces can be clustered from the voxels filtered by a thickness threshold. To obtain the regular planar primitives, the remaining voxels with larger flatness are further divided into multiscale super-voxels as basic units, and the final segmented planes are enriched and refined in a mutually reinforcing manner under the framework of a global energy optimization. We have implemented the proposed algorithms and tested them mainly on two point clouds that differ in point density and urban characteristics; experimental results on complex building structures illustrate the efficacy of the proposed framework.
Large-scale seismic signal analysis with Hadoop
Addair, T. G.; Dodge, D. A.; Walter, W. R.; ...
2014-02-11
In seismology, waveform cross correlation has been used for years to produce high-precision hypocenter locations and for sensitive detectors. Because correlated seismograms generally are found only at small hypocenter separation distances, correlation detectors have historically been reserved for spotlight purposes. However, many regions have been found to produce large numbers of correlated seismograms, and there is growing interest in building next-generation pipelines that employ correlation as a core part of their operation. In an effort to better understand the distribution and behavior of correlated seismic events, we have cross correlated a global dataset consisting of over 300 million seismograms. This was done using a conventional distributed cluster, and required 42 days. In anticipation of processing much larger datasets, we have re-architected the system to run as a series of MapReduce jobs on a Hadoop cluster. In doing so we achieved a factor of 19 performance increase on a test dataset. We found that fundamental algorithmic transformations were required to achieve the maximum performance increase. Whereas in the original IO-bound implementation, we went to great lengths to minimize IO, in the Hadoop implementation where IO is cheap, we were able to greatly increase the parallelism of our algorithms by performing a tiered series of very fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce jobs required reading and writing large amounts of data.
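[Editor's note] As a conceptual single-process stand-in for the Hadoop pipeline, the sketch below phrases pairwise correlation as a map phase that bins seismograms by an assumed spatial key, so that only plausibly correlated events (nearby hypocenters) meet in the same reduce group, followed by a reduce phase that cross-correlates within each bin. The record layout and binning key are illustrative assumptions, not the authors' schema.

    from itertools import groupby
    from operator import itemgetter
    import numpy as np

    def map_phase(records):
        """records: iterable of (meta, waveform). Emit (bin_key, waveform) so
        that only seismograms with nearby hypocenters share a reduce group."""
        for meta, wave in records:
            key = (round(meta["lat"], 1), round(meta["lon"], 1))  # illustrative bin
            yield key, wave

    def reduce_phase(pairs):
        """Group by key and cross-correlate all waveform pairs within a bin."""
        for key, group in groupby(sorted(pairs, key=itemgetter(0)),
                                  key=itemgetter(0)):
            waves = [w for _, w in group]
            for i in range(len(waves)):
                for j in range(i + 1, len(waves)):
                    cc = np.correlate(waves[i], waves[j], mode="full")
                    norm = np.linalg.norm(waves[i]) * np.linalg.norm(waves[j])
                    yield key, i, j, float(cc.max() / (norm + 1e-12))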
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points in time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of the MRI volume to a probability space based on an on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
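[Editor's note] A minimal sketch of the final two steps (outlier-based necrosis detection inside the segmented tumor, followed by removal of thin regions) might look as follows; the outlier cutoff k and the structuring-element width are illustrative, and the probability volume is assumed to come from the earlier steps of the pipeline.

    import numpy as np
    from scipy import ndimage

    def detect_necrosis(prob, tumor_mask, k=2.5, min_width_vox=2):
        """Flag necrosis as probability outliers inside the segmented tumor,
        then drop thin regions (likely noise). k and the width threshold are
        illustrative, not the paper's values."""
        vals = prob[tumor_mask]
        outliers = tumor_mask & (np.abs(prob - vals.mean()) > k * vals.std())
        # Morphological opening removes regions narrower than the structure
        structure = np.ones((min_width_vox,) * 3, dtype=bool)
        return ndimage.binary_opening(outliers, structure=structure)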
The terroir of vineyards - climatic variability in an Austrian wine-growing region
NASA Astrophysics Data System (ADS)
Gerersdorfer, T.
2010-09-01
The description of a terroir is a concept in viticulture that relates the sensory attributes of wine to the environmental conditions in which the grapes grow. Many factors are involved, including climate, soil, cultivar and human practices, and all these factors interact in manifold ways. The study area of Carnuntum is a small wine-growing region in the eastern part of Austria. It is rich in Roman remains, which play a major role in tourism and in the marketing strategies of the wines as well. An interdisciplinary study on the environmental characteristics, particularly with regard to the growing conditions of grapes, was started in this region. The study is concerned with the description of the physiogeographic properties of the region and with the investigation of the dominating viticultural functions. Grape-vines depend to a high degree on climatic conditions. Compared to other influencing factors like soil, climate plays a significant role. In the framework of this interdisciplinary project, climatic variability within the Carnuntum wine-growing region is investigated. On the one hand, microclimatic variations are influenced by soil type and by canopy management. On the other hand, the variability is a result of the topoclimate (altitude, aspect and slope), and therefore relief is a major terroir factor. Results of microclimatic measurements and their variations are presented, with a focus on the interpretation of the relationship between relief, the structure of the vineyards and the climatic conditions over the course of a full year.
NASA Technical Reports Server (NTRS)
Ganguly, Sangram
2015-01-01
Plant phenology and maximum photosynthetic state determine the spatiotemporal variability of gross primary productivity (GPP) of vegetation. Recent warming-induced impacts are accelerating shifts of phenology and physiological status over Northern vegetated land, so understanding and quantifying these changes is very important. Here, we investigate 1) how vegetation phenology and physiological status (maximum photosynthesis) have evolved over the last three decades and 2) how these components contribute to the inter-annual variation of GPP over the same period. We utilized long-term remotely sensed data (GIMMS (Global Inventory Modeling and Mapping Studies) NDVI3g (Normalized Difference Vegetation Index, 3rd generation) and MODIS (Moderate Resolution Imaging Spectroradiometer)) to extract large-scale phenology metrics (growing season start, end and duration) and productivity (i.e., the growing season integrated vegetation index, GSIVI) to answer these questions. For evaluation purposes, we also introduced field-measured phenology and productivity datasets (e.g., FLUXNET) and comparable remotely sensed and modeled metrics at continental and regional scales. From this investigation, we found that the onset of the growing season has advanced by 1.61 days per decade and the growing season end has been delayed by 0.67 days per decade over the circumpolar region. This asymmetric extension of the growing season results in a lengthening growing-season trend (2.96 days per decade) and a widespread increasing vegetation-productivity trend (2.96 GSIVI per decade) over Northern land. However, the regionally divergent phenology shifts and maximum photosynthetic states contribute differently to productivity, its inter-annual variability and its trend. We quantified that about 50 percent, 13 percent and 6.5 percent of Northern land's inter-annual variability is dominantly controlled by the onset of the growing season, the end of the growing season and the maximum photosynthetic state, respectively. Productivity over the remaining approximately 30 percent of the region is driven by a combination of these dominant drivers. Our study clearly shows the regionally different contributions of phenological and physiological components to characterizing vegetation production over the last three decades.
Solving the infeasible trust-region problem using approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott
2004-07-01
The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations for both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. Experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region, however, may have no feasible solution when explicit constraints are present. To remedy this problem, the mathematical community has developed different versions of a composite-step approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential composite-step algorithms. In this paper, a description of the similarities is presented, along with an extension of the composite-step algorithm to the case of approximations.
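[Editor's note] One common formalization of the composite-step idea (the Byrd-Omojokun decomposition, stated here as standard background rather than taken from the paper) splits the trial step $s = n + t$ at iterate $x_k$ with trust-region radius $\Delta_k$, constraint values $c_k$, Jacobian $A_k$, gradient $\nabla f_k$ and Hessian model $B_k$:

$$ \text{normal step:}\quad \min_{n}\; \|c_k + A_k n\|_2^2 \quad \text{s.t.}\ \|n\|_2 \le \zeta \Delta_k,\ \ \zeta \in (0,1), $$

$$ \text{tangential step:}\quad \min_{t}\; \nabla f_k^{\top}(n+t) + \tfrac{1}{2}(n+t)^{\top} B_k (n+t) \quad \text{s.t.}\ A_k t = 0,\ \ \|n+t\|_2 \le \Delta_k, $$

so the normal step restores feasibility as far as the shrunken trust region allows, and the tangential step reduces the model objective without undoing that restoration.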
Is searching full text more effective than searching abstracts?
Lin, Jimmy
2009-01-01
Background With the growing availability of full-text articles online, scientists and other consumers of the life sciences literature now have the ability to go beyond searching bibliographic records (title, abstract, metadata) to directly access full-text content. Motivated by this emerging trend, I posed the following question: is searching full text more effective than searching abstracts? This question is answered by comparing text retrieval algorithms on MEDLINE® abstracts, full-text articles, and spans (paragraphs) within full-text articles using data from the TREC 2007 genomics track evaluation. Two retrieval models are examined: bm25 and the ranking algorithm implemented in the open-source Lucene search engine. Results Experiments show that treating an entire article as an indexing unit does not consistently yield higher effectiveness compared to abstract-only search. However, retrieval based on spans, or paragraphs-sized segments of full-text articles, consistently outperforms abstract-only search. Results suggest that highest overall effectiveness may be achieved by combining evidence from spans and full articles. Conclusion Users searching full text are more likely to find relevant articles than searching only abstracts. This finding affirms the value of full text collections for text retrieval and provides a starting point for future work in exploring algorithms that take advantage of rapidly-growing digital archives. Experimental results also highlight the need to develop distributed text retrieval algorithms, since full-text articles are significantly longer than abstracts and may require the computational resources of multiple machines in a cluster. The MapReduce programming model provides a convenient framework for organizing such computations. PMID:19192280
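[Editor's note] For reference, the bm25 score mentioned above ranks a document D against query Q as follows (standard Okapi form, with typical parameter values $k_1 \approx 1.2$ and $b \approx 0.75$; Lucene's default ranking at the time was a related tf-idf vector-space formula rather than bm25 itself):

$$ \mathrm{score}(D, Q) = \sum_{q \in Q} \mathrm{IDF}(q)\, \frac{f(q, D)\,(k_1 + 1)}{f(q, D) + k_1 \left(1 - b + b\,\frac{|D|}{\mathrm{avgdl}}\right)}, \qquad \mathrm{IDF}(q) = \ln \frac{N - n_q + 0.5}{n_q + 0.5}, $$

where $f(q,D)$ is the frequency of term q in D, $|D|$ the document length, avgdl the mean document length in the collection, N the number of documents, and $n_q$ the number of documents containing q. Longer units (full articles) inflate $|D|$, which is one reason span-level indexing can outperform whole-article indexing.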
NASA Astrophysics Data System (ADS)
Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng
2002-03-01
The novel spectral acceleration (NSA) algorithm has been shown to produce an $\mathcal{O}(N_{\mathrm{tot}})$ efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where $N_{\mathrm{tot}}$ is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast-multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm, the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, $\varphi_{s,\max}$, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region, $L_s$. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat-surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas for the NSA parameters for an arbitrary value of $\varphi_{s,\max}$ are presented, resulting in more flexibility in selecting $L_s$ to compromise between the computations of the strong- and weak-region contributions. In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.
NASA Astrophysics Data System (ADS)
Mubako, S. T.; Hargrove, W. L.; Heyman, J. M.; Reyes, C. S.
2016-12-01
Urbanization is an area of growing interest in assessing the impact of human activities on water resources in arid regions. Remote sensing techniques provide an opportunity to analyze land cover change over time, and are useful in monitoring areas undergoing rapid urban growth. This case study for the water-scarce Upper Rio Grande River Basin uses a supervised classification algorithm to quantify the rate and evaluate the pattern of urban sprawl. A focus is made on the fast growing El-Paso-Juarez metropolitan area on the US-Mexico border and the City of Las Cruces in New Mexico, areas where environmental challenges and loss of agricultural and native land to urban development are major concerns. Preliminary results show that the land cover is dominantly native with some significant agriculture along the Rio Grande River valley. Urban development across the whole study area expanded from just under 3 percent in 1990, to more than 11 percent in 2015. The urban expansion is occurring mainly around the major urban areas of El Paso, Ciudad Juarez, and Las Cruces, although there is visible growth of smaller urban settlements scattered along the Rio Grande River valley during the same analysis period. The proportion of native land cover fluctuates slightly depending on how much land is under crops each analysis year, but there is a decreasing agricultural land cover trend suggesting that land from this sector is being lost to urban development. This analysis can be useful in planning to protect the environment, preparing for growth in infrastructure such as schools, increased traffic demands, and monitoring availability of resources such as groundwater as the urban population grows.
Benchmarking database performance for genomic data.
Khushi, Matloob
2015-06-01
Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, at present there is no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations, and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm, pair-wise overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were computed, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). © 2015 Wiley Periodicals, Inc.
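[Editor's note] The core of any SQL overlap computation is the interval-intersection predicate; a self-contained sketch using SQLite is shown below. The schema and query are generic illustrations of that predicate, not the RegMap algorithm itself.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE peaks_a (chrom TEXT, start INT, stop INT);
        CREATE TABLE peaks_b (chrom TEXT, start INT, stop INT);
        CREATE INDEX idx_b ON peaks_b (chrom, start, stop);
    """)
    con.executemany("INSERT INTO peaks_a VALUES (?, ?, ?)",
                    [("chr1", 100, 200), ("chr1", 500, 600)])
    con.executemany("INSERT INTO peaks_b VALUES (?, ?, ?)",
                    [("chr1", 150, 250), ("chr1", 700, 800)])

    # Two closed intervals on the same chromosome overlap iff each starts
    # before the other ends.
    rows = con.execute("""
        SELECT a.chrom, a.start, a.stop, b.start, b.stop
        FROM peaks_a AS a
        JOIN peaks_b AS b
          ON a.chrom = b.chrom
         AND a.start <= b.stop
         AND b.start <= a.stop
    """).fetchall()
    print(rows)   # [('chr1', 100, 200, 150, 250)]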
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged, and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
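[Editor's note] A schematic of the decoupled update might look as follows: one tomographic EM step produces a resolution-blurred image estimate, and several cheap image-space EM (Richardson-Lucy-type) sub-iterations then deconvolve it against an assumed Gaussian resolution kernel. The kernel, the user-supplied tomographic update, and the iteration counts are illustrative assumptions, not the authors' exact scheme.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nested_em_step(x, tomo_em_update, psf_sigma=1.0, n_inner=10, eps=1e-8):
        """One outer iteration of a decoupled resolution-model EM scheme (sketch).

        x              : current deconvolved image estimate (non-negative array)
        tomo_em_update : callable performing one tomographic EM update on the
                         blurred image (user-supplied; stands in for the
                         scanner forward/back-projection step)
        """
        y = tomo_em_update(gaussian_filter(x, psf_sigma))  # blurred-image EM step
        z = x.copy()
        for _ in range(n_inner):                           # image-space EM inner loop
            blur = gaussian_filter(z, psf_sigma)
            z = z * gaussian_filter(y / (blur + eps), psf_sigma)  # RL update
        return z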
Using model-data fusion to analyze the interannual variability of NEE of an alpine grassland
NASA Astrophysics Data System (ADS)
Scholz, Katharina; Hammerle, Albin; Hiltbrunner, Erika; Wohlfahrt, Georg
2017-04-01
To understand the processes and magnitude of carbon dynamics of the biosphere, modeling approaches are an important tool to analyze carbon budgets from regional to global scale. Here, a simple process-based ecosystem carbon model was used to investigate differences in CO2 fluxes of a high mountain grassland near Furka Pass in the Swiss central Alps, at an elevation of about 2400 m a.s.l., during two growing seasons differing in snow melt date. Data on net ecosystem CO2 exchange (NEE) and meteorological conditions were available from 20.06.2013 to 08.10.2014, covering two snow-free periods. The NEE data indicate that the carbon uptake during the growing season in 2013 was considerably lower than in 2014. To investigate whether the lower carbon uptake in 2013 was mainly due to the short growing season, an effect of the biotic response to spring environmental conditions, or the direct effect of weather conditions during the growing season, a modeling approach was applied. For this purpose, an ecosystem mass balance C model with 13 unknown parameters was constructed based on the DALEC model to represent the major C fluxes among six carbon pools (foliage, roots, necromass, litter, soil organic carbon and a labile pool to support leaf onset in spring) of the grassland ecosystem. Daily gross primary production was estimated by use of a sun/shade big-leaf model of canopy photosynthesis. By calibrating the model with NEE data from individual years, two sets of parameters were retrieved, which were then used to run the model under environmental conditions of the same as well as the other year. The parameter estimation was done using DREAM, an algorithm for statistical inference of parameters using Bayesian statistics. In order to account for non-normality, heteroscedasticity and correlation of model residuals, a common problem in ecological modeling, a generalized likelihood function was applied. The results indicate that the late start of the growing season in 2013 led to a slower structural development of the grassland in the beginning. Nevertheless, maximum daily NEE values in 2013 were comparable to those in 2014. Moreover, the analysis showed that there was no direct effect of weather conditions during the snow-free period. This indicates that the overall lower carbon uptake in 2013 was due to the slow start and the short growing season.
Detection of insect damage in almonds
NASA Astrophysics Data System (ADS)
Kim, Soowon; Schatzki, Thomas F.
1999-01-01
Pinhole insect damage in natural almonds is very difficult to detect on-line. Further, evidence exists relating insect damage to aflatoxin contamination. Hence, for quality and health reasons, methods to detect and remove such damaged nuts are of great importance. In this study, we explored the possibility of using x-ray imaging to detect pinhole damage in almonds by insects. X-ray film images of about 2000 almonds and x-ray linescan images of only 522 pinhole-damaged almonds were obtained. The pinhole-damaged region appeared slightly darker than the non-damaged region in x-ray negative images. A machine recognition algorithm was developed to detect these darker regions. The algorithm used first-order and second-order information to identify the damaged region. To reduce the possibility of false positive results due to the germ region in high resolution images, germ detection and removal routines were also included. With film images, the algorithm showed approximately an 81 percent correct recognition ratio with only 1 percent false positives, whereas with linescan images it correctly recognized 65 percent of pinholes with about 9 percent false positives. The algorithm was very fast and efficient, requiring only minimal computation time. If implemented on-line, the theoretical throughput of this recognition system would be 66 nuts/second.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veeraraghavan, H; Tyagi, N; Riaz, N
2014-06-01
Purpose: Identification and image-based monitoring of lymph nodes growing due to disease could be an attractive alternative to prophylactic head and neck irradiation. We evaluated the accuracy of the user-interactive Grow Cut algorithm for volumetric segmentation of radiotherapy-relevant lymph nodes from MRI taken weekly during radiotherapy. Method: The algorithm employs user-drawn strokes in the image to volumetrically segment multiple structures of interest. We used 3D T2-weighted turbo spin echo images with an isotropic resolution of 1 mm3 and a FOV of 492×492×300 mm3 of head and neck cancer patients who underwent weekly MR imaging during the course of radiotherapy. Various lymph node (LN) levels (N2, N3, N4/5) were individually contoured on the weekly MR images by an expert physician and used as ground truth in our study. The segmentation results were compared with the physician-drawn lymph nodes based on the DICE similarity score. Results: Three head and neck patients with 6 weekly MR images were evaluated. Two patients had level 2 LN drawn and one patient had level N2, N3 and N4/5 drawn on each MR image. The algorithm took an average of a minute to segment the entire volume (512×512×300 mm3). The algorithm achieved an overall DICE similarity score of 0.78. The time taken for initializing and obtaining the volumetric mask was about 5 mins for cases with only N2 LN and about 15 mins for the case with N2, N3 and N4/5 level nodes. The longer initialization time for the latter case was due to the need for accurate user inputs to separate overlapping portions of the different LN. The standard deviation in segmentation accuracy at different time points was at most 0.05. Conclusions: Our initial evaluation of the Grow Cut segmentation shows reasonably accurate and consistent volumetric segmentations of LN with minimal user effort and time.
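The Grow Cut algorithm named above is a published cellular automaton: each seeded pixel repeatedly tries to "conquer" its neighbours with a strength attenuated by intensity difference. A minimal 2D sketch, assuming a grayscale image scaled to [0, 1] and integer stroke labels (this is an illustration of the automaton, not the implementation evaluated in the abstract):

```python
import numpy as np

def growcut(image, seeds, n_iter=50):
    # seeds: 0 = unlabeled, 1..K = user strokes; seed cells start fully confident.
    label = seeds.copy()
    strength = (seeds > 0).astype(float)
    h, w = image.shape
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_iter):
        new_label, new_strength = label.copy(), strength.copy()
        for y in range(h):
            for x in range(w):
                for dy, dx in nbrs:
                    q = (y + dy, x + dx)
                    if not (0 <= q[0] < h and 0 <= q[1] < w):
                        continue
                    g = 1.0 - abs(image[y, x] - image[q])  # similarity in [0, 1]
                    attack = g * strength[q]               # attenuated neighbour strength
                    if attack > new_strength[y, x]:
                        new_strength[y, x] = attack
                        new_label[y, x] = label[q]
        label, strength = new_label, new_strength
    return label

img = np.array([[0.1, 0.1, 0.9],
                [0.1, 0.2, 0.9],
                [0.1, 0.9, 0.8]])
seeds = np.zeros_like(img, dtype=int)
seeds[0, 0] = 1      # stroke: background
seeds[0, 2] = 2      # stroke: node of interest (hypothetical)
print(growcut(img, seeds))
```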
GBA manager: an online tool for querying low-complexity regions in proteins.
Bandyopadhyay, Nirmalya; Kahveci, Tamer
2010-01-01
We developed GBA Manager, an online software tool that facilitates the Graph-Based Algorithm (GBA) we proposed in our earlier work. GBA identifies the low-complexity regions (LCRs) of protein sequences. It exploits a similarity matrix, such as BLOSUM62, to compute the complexity of the subsequences of the input protein sequence, and uses a graph-based algorithm to accurately compute the regions that have low complexities. GBA Manager is a user-friendly web service that enables online querying of protein sequences using GBA. In addition to the querying capabilities of the existing GBA algorithm, GBA Manager computes the p-values of the LCRs identified. The p-value gives an estimate of the probability that the region appears by chance. GBA Manager presents the output in three different understandable formats. GBA Manager is freely accessible at http://bioinformatics.cise.ufl.edu/GBA/GBA.htm.
Efficient solutions to the Euler equations for supersonic flow with embedded subsonic regions
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Dwoyer, Douglas L.
1987-01-01
A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions is described. Convergence of the basic algorithm to the steady state is quadratic for fully supersonic flows and is linear for other flows. This is in contrast to the block alternating direction implicit methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented herein is easily coupled with methods to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, and yields a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing oblique and normal shock waves which confirm the efficiency of the iteration strategy.
NASA Astrophysics Data System (ADS)
He, A.; Quan, C.
2018-04-01
The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for conversion of orientation to direction in mask areas is computationally heavy and unoptimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used for refinement of the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.
NASA Astrophysics Data System (ADS)
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms that are currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity
NASA Astrophysics Data System (ADS)
Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin
2017-07-01
Coronal Mass Ejections (CMEs) release tremendous amounts of energy in the solar system, which has an impact on satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm includes three steps: i) define the feature vector, which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using a decision rule based on space-time continuity. Experimental results show the efficiency and superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
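The ELM named in step ii) is a standard single-hidden-layer network whose input weights are random and fixed; only the output weights are learned, by a single least-squares solve. A minimal sketch with synthetic data standing in for the grayscale/texture feature vectors (feature count and labels here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random biases, never trained
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # output weights: one least-squares solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.normal(size=(200, 8))                     # 8 hypothetical features per region
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)   # synthetic CME / non-CME labels
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())
```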
Comparative Analysis of Aerosol Retrievals from MODIS, OMI and MISR Over Sahara Region
NASA Technical Reports Server (NTRS)
Lyapustin, A.; Wang, Y.; Hsu, C.; Terres, O.; Leptoukh, G.; Kalashnikova, O.; Korkin, S.
2011-01-01
MODIS is a wide field-of-view sensor providing daily global observations of the Earth. Currently, global MODIS aerosol retrievals over land are performed with the main Dark Target algorithm complemented with the Deep Blue (DB) algorithm over bright deserts. The Dark Target algorithm relies on a surface parameterization which relates reflectance in the MODIS visible bands with the 2.1 micrometer region, whereas the Deep Blue algorithm uses an ancillary angular distribution model of surface reflectance developed from a time series of clear-sky MODIS observations. Recently, a new Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm has been developed for MODIS. MAIAC uses time-series and image-based processing to perform simultaneous retrievals of aerosol properties and surface bidirectional reflectance. It is a generic algorithm which works over both dark vegetated surfaces and bright deserts and performs retrievals at 1 km resolution. In this work, we provide a comparative analysis of the DB, MAIAC, MISR and OMI aerosol products over the bright deserts of northern Africa.
Scalability problems of simple genetic algorithms.
Thierens, D
1999-01-01
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we show that straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.
Euskirchen, E.S.; McGuire, A.D.; Kicklighter, D.W.; Zhuang, Q.; Clein, Joy S.; Dargaville, R.J.; Dye, D.G.; Kimball, J.S.; McDonald, K.C.; Melillo, J.M.; Romanovsky, V.E.; Smith, N.V.
2006-01-01
In terrestrial high-latitude regions, observations indicate recent changes in snow cover, permafrost, and soil freeze-thaw transitions due to climate change. These modifications may result in temporal shifts in the growing season and the associated rates of terrestrial productivity. Changes in productivity will influence the ability of these ecosystems to sequester atmospheric CO2. We use the terrestrial ecosystem model (TEM), which simulates the soil thermal regime, in addition to terrestrial carbon (C), nitrogen and water dynamics, to explore these issues over the years 1960-2100 in extratropical regions (30-90°N). Our model simulations show decreases in snow cover and permafrost stability from 1960 to 2100. Decreases in snow cover agree well with National Oceanic and Atmospheric Administration satellite observations collected between the years 1972 and 2000, with Pearson rank correlation coefficients between 0.58 and 0.65. Model analyses also indicate a trend towards an earlier thaw date of frozen soils and the onset of the growing season in the spring by approximately 2-4 days from 1988 to 2000. Between 1988 and 2000, satellite records yield a slightly stronger trend in thaw and the onset of the growing season, averaging between 5 and 8 days earlier. In both the TEM simulations and the satellite records, trends in day of freeze in the autumn are weaker, such that overall increases in growing season length are due primarily to earlier thaw. Although regions with the longest snow cover duration displayed the greatest increase in growing season length, these regions maintained smaller increases in productivity and heterotrophic respiration than those regions with shorter duration of snow cover and less of an increase in growing season length. Concurrent with increases in growing season length, we found a reduction in soil C and increases in vegetation C, with greatest losses of soil C occurring in those areas with more vegetation, but simulations also suggest that this trend could reverse in the future. Our results reveal noteworthy changes in snow, permafrost, growing season length, productivity, and net C uptake, indicating that prediction of terrestrial C dynamics from one decade to the next will require that large-scale models adequately take into account the corresponding changes in soil thermal regimes.
Linear single-step image reconstruction in the presence of nonscattering regions.
Dehghani, H; Delpy, D T
2002-06-01
There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the qualitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor qualitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.
NASA Astrophysics Data System (ADS)
Jackson, Christopher Robert
"Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.
NASA Astrophysics Data System (ADS)
Li, Zhong-xiao; Li, Zhen-chun
2016-09-01
The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper, we first use a cross-correlation strategy to determine the limited supporting region of the filter, i.e., the part of the filter coefficient space whose coefficients play a major role in multiple removal. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with a non-Gaussian maximization (L1 norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper, we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of the filter. Compared with the FIST-based multichannel predictive deconvolution without the limited supporting region, the proposed method can reduce the computation burden effectively while achieving similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
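The fast iterative shrinkage thresholding family invoked here is the standard FISTA scheme for L1-regularized least squares: a gradient step at a momentum point followed by soft thresholding. A generic sketch of that solver (not the authors' predictive deconvolution code; the operator and data are synthetic):

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=200):
    """Minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)                # gradient at the momentum point
        u = z - g / L
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum update
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100); x_true[[5, 30, 70]] = (1.0, -2.0, 1.5)
x_hat = fista_l1(A, A @ x_true, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # should recover the sparse support
```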
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, in-exact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
NASA Astrophysics Data System (ADS)
Cenci, Luca; Boni, Giorgio; Pulvirenti, Luca; Gabellani, Simone; Gardella, Fabio; Squicciarino, Giuseppe; Pierdicca, Nazzareno; Benedetto, Catia
2016-04-01
In a reservoir, water level monitoring is important for emergency management purposes. This information can be used to estimate the degree of filling of the water body, thus helping decision makers in flood control operations. Furthermore, if assimilated into hydrological models and coupled with rainfall forecasts, this information can be used for flood forecasting and early warning. In many cases, the water level is not known (e.g. in data-scarce environments) or not shared by operators. Remote sensing may allow overcoming these limitations, enabling its estimation. The objective of this work is to present the Shoreline to Height (S2H) algorithm, developed to retrieve the height of the water stored in reservoirs from satellite images. To this aim, some auxiliary data are needed: a DEM and the maximum/minimum height that can be reached by the water. In data-scarce environments, this information can be easily obtained on the Internet (e.g. free, worldwide DEMs and design data for artificial reservoirs). S2H was tested with different satellite data, both optical and SAR (Landsat and Cosmo SkyMed®-CSK®), in order to assess the impact of different sensors on the final estimates. The study area was the Place-Moulin Lake (Valle d'Aosta-VdA, Italy), where a monitoring network is present that can provide reliable ground truths for validating the algorithm and assessing its accuracy. When the algorithm was developed, it was assumed that no "official" auxiliary data were available. Therefore, two DEMs (SRTM 1 arc-second and ASTER GDEM) were used to evaluate their performances. The maximum/minimum water height values were found on the website of the VdA Region. S2H is based on three steps: i) satellite data preprocessing (Landsat: atmospheric correction; CSK®: geocoding and speckle filtering); ii) water mask generation (using a thresholding and region growing algorithm) and shoreline extraction; iii) retrieval of the shoreline height according to the reference DEMs (adopting a statistical approach). The algorithm was tested for different water heights and results were compared against ground truths. Findings showed that the combination CSK®-SRTM provided the more reliable results. It was also found that the overall quality of the estimates increases as the water height increases, reaching an accuracy of up to some centimetres. This result is particularly interesting for flood control applications, where it is important to be accurate when the reservoir's degree of filling is high. The potential of S2H for operational hydrology purposes was tested in a real-case simulation, in which the prediction of the river discharge downstream of the dam was needed for flood risk management purposes. The water height value retrieved with S2H was assimilated into a semi-distributed, event-based hydrological model (DRiFt) by using a simple direct insertion algorithm. DRiFt is usually run operationally on the reservoir by using ground truths as input data. The result of the data assimilation experiment was compared with the "real", operational run of the model. Findings showed a high agreement between the two simulations, proving the utility and quality of the S2H algorithm. "Project carried out using CSK® Products, © of the Italian Space Agency (ASI), delivered under a license to use by ASI."
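A loose sketch of steps ii)-iii) may clarify the shoreline-to-height idea: threshold the image to a water mask, keep the largest connected water body, take its boundary as the shoreline, and read the water height off the DEM along that shoreline, here with a median as the statistical estimate. The threshold, inputs, and clipping to the design limits are all assumptions, not the published S2H configuration:

```python
import numpy as np
from scipy.ndimage import binary_erosion, label

def s2h_sketch(image, dem, h_min, h_max, water_thresh):
    water = image < water_thresh                     # dark pixels taken as water
    labels, n = label(water)
    if n == 0:
        return None
    sizes = np.bincount(labels.ravel()); sizes[0] = 0
    water = labels == sizes.argmax()                 # keep the largest water body
    shoreline = water & ~binary_erosion(water)       # boundary pixels of the mask
    h = np.median(dem[shoreline])                    # robust shoreline height
    return float(np.clip(h, h_min, h_max))           # respect max/min design heights

img = np.random.rand(100, 100); img[30:70, 30:70] = 0.05   # synthetic lake
dem = np.fromfunction(lambda y, x: 1800 + 0.5 * y, (100, 100))
print(s2h_sketch(img, dem, h_min=1750.0, h_max=1970.0, water_thresh=0.2))
```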
m-BIRCH: an online clustering approach for computer vision applications
NASA Astrophysics Data System (ADS)
Madan, Siddharth K.; Dana, Kristin J.
2015-03-01
We adapt a classic online clustering algorithm called Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) to incrementally cluster large datasets of features commonly used in multimedia and computer vision. We call the adapted version modified-BIRCH (m-BIRCH). The algorithm uses only a fraction of the dataset memory to perform clustering, and updates the clustering decisions when new data comes in. The modifications made in m-BIRCH enable data-driven parameter selection and effectively handle varying-density regions in the feature space. Data-driven parameter selection automatically controls the level of coarseness of the data summarization. Effective handling of varying-density regions is necessary to represent the different density regions well in the data summarization. We use m-BIRCH to cluster 840K color SIFT descriptors and 60K outlier-corrupted grayscale patches, as well as datasets consisting of challenging non-convex clustering patterns. Our implementation of the algorithm provides a useful clustering tool and is made publicly available.
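For readers who want to experiment with the underlying idea, scikit-learn ships a stock BIRCH whose `partial_fit` accepts data in chunks while keeping only a CF-tree summary in memory. This shows plain BIRCH in an online setting, not the paper's m-BIRCH (which adds data-driven parameter selection and varying-density handling); the data and parameters are placeholders:

```python
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(3)
model = Birch(threshold=0.5, n_clusters=5)

for _ in range(10):                         # ten incoming batches
    chunk = rng.normal(size=(1000, 16))     # stand-in for SIFT-like features
    chunk[:500] += 3.0                      # two crude modes per batch
    model.partial_fit(chunk)                # update only the CF-tree summary

labels = model.predict(rng.normal(size=(5, 16)))
print(labels)
```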
Digital Terrain from a Two-Step Segmentation and Outlier-Based Algorithm
NASA Astrophysics Data System (ADS)
Hingee, Kassel; Caccetta, Peter; Caccetta, Louis; Wu, Xiaoliang; Devereaux, Drew
2016-06-01
We present a novel ground filter for remotely sensed height data. Our filter has two phases: the first phase segments the digital surface model (DSM) with a slope threshold and uses gradient direction to identify candidate ground segments; the second phase fits surfaces to the candidate ground points and removes outliers. Digital terrain is obtained by a surface fit to the final set of ground points. We tested the new algorithm on DSMs for a 9600 km2 region around Perth, Australia. This region contains a large mix of land uses (urban, grassland, native forest and plantation forest) and includes both a sandy coastal plain and a hillier region (elevations up to 0.5 km). The DSMs are captured annually at 0.2 m resolution using aerial stereo photography, resulting in 1.2 TB of input data per annum. The overall accuracy of the filter was estimated to be 89.6%, and on a small semi-rural subset our algorithm was found to have 40% fewer errors compared to Inpho's Match-T algorithm.
Content-aware dark image enhancement through channel division.
Rivera, Adin Ramirez; Ryu, Byungyong; Chae, Oksam
2012-09-01
Current contrast enhancement algorithms occasionally result in artifacts, overenhancement, and unnatural effects in the processed images. These drawbacks increase for images taken under poor illumination conditions. In this paper, we propose a content-aware algorithm that enhances dark images, sharpens edges, reveals details in textured regions, and preserves the smoothness of flat regions. The algorithm produces an ad hoc transformation for each image, adapting the mapping functions to each image's characteristics to produce the maximum enhancement. We analyze the contrast of the image in the boundary and textured regions, and group the information with common characteristics. These groups model the relations within the image, from which we extract the transformation functions. The results are then adaptively mixed, by considering the characteristics of the human vision system, to boost the details in the image. Results show that the algorithm can automatically process a wide range of images (e.g., mixed shadow and bright areas, outdoor and indoor lighting, and face images) without introducing artifacts, which is an improvement over many existing methods.
Illinois Innovation Talent Project: Implications for Two-Year Institutions
ERIC Educational Resources Information Center
Tyszko, Jason A.; Sheets, Robert G.
2012-01-01
There is a growing consensus that the United States and its regions, including the Midwest region, will increasingly compete on innovation. This also is widely recognized in the business world. There is also growing consensus that innovation talent--the human talent to drive and support innovation--will be a major key. Despite this consensus,…
Tangled trends for temperate rain forests as temperatures tick up
Noreen Parks; Tara Barrett
2013-01-01
Climate change is altering growing conditions in the temperate rain forest region that extends from northern California to the Gulf of Alaska. Longer, warmer growing seasons are generally increasing the overall potential for forest growth in the region. However, species differ in their ability to adapt to changing conditions. For example, researchers with Pacific...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yakir, D.; Gat, J.; Issar, A.
1994-08-01
The isotopic ratios ¹³C/¹²C and ¹⁸O/¹⁶O of cellulose from tamarix trees which were used by the Roman army as a groundwork of the siege-rampart of Masada (AD 70-73) were compared with ratios measured in present-day tamarix trees growing in the Masada region and in central Israel. The ancient tamarix cellulose is depleted in both ¹³C and ¹⁸O compared to cellulose from trees growing in the Masada region today. Similar trends were observed on comparing modern tamarix trees growing in the Negev Desert with those growing in the temperate climate of central Israel. Considering the factors that can contribute to the observed changes in isotopic composition, the authors conclude that the ancient trees enjoyed less arid environmental conditions during their growth compared to contemporary trees in this desert region. This report demonstrates the potential in using combined ¹⁸O and ¹³C analyses of archeological plant material as independent indication of regional climate change in desert areas (where conventional isotopic analyses, such as in tree rings, are impractical).
Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm
NASA Astrophysics Data System (ADS)
Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.
2018-05-01
A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
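The per-region binarization described above can be sketched simply: split the spray image into axial zones and threshold each zone from its own luminosity profile. The sketch below uses a fixed fraction of each zone's peak profile value as the level, whereas the paper derives its levels from an analysis of a representative profile; the fraction, zone count, and data are all assumptions:

```python
import numpy as np

def segment_spray(img, n_regions=3, frac=0.15):
    h, _ = img.shape
    mask = np.zeros_like(img, dtype=bool)
    for zone in np.array_split(np.arange(h), n_regions):  # three axial zones
        rows = img[zone]
        profile = rows.mean(axis=0)                       # zone luminosity profile
        thr = frac * profile.max()                        # zone-specific threshold
        mask[zone] = rows > thr
    return mask

img = np.random.rand(90, 120) * np.linspace(1, 0.2, 120)  # fading synthetic spray
print(segment_spray(img).sum(), "spray pixels")
```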
A Framework for Understanding Physics Students' Computational Modeling Practices
ERIC Educational Resources Information Center
Lunk, Brandon Robert
2012-01-01
With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content…
Reeves, Anthony P.; Xie, Yiting; Liu, Shuang
2017-01-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.
Blood vessel segmentation in color fundus images based on regional and Hessian features.
Shah, Syed Ayaz Ali; Tang, Tong Boon; Faye, Ibrahima; Laude, Augustinus
2017-08-01
We propose a new algorithm for blood vessel segmentation based on regional and Hessian features, for image analysis in retinal abnormality diagnosis. Firstly, color fundus images from the publicly available DRIVE database were converted from RGB to grayscale. To enhance the contrast of the dark objects (blood vessels) against the background, the dot product of the grayscale image with itself was generated. To rectify the variation in contrast, we used a 5 × 5 window filter on each pixel. Based on 5 regional features, 1 intensity feature and 2 Hessian features per scale using 9 scales, we extracted a total of 24 features. A linear minimum squared error (LMSE) classifier was trained to classify each pixel into a vessel or non-vessel pixel. The DRIVE dataset provided 20 training and 20 test color fundus images. The proposed algorithm achieves a sensitivity of 72.05% with 94.79% accuracy. It achieved higher accuracy (0.9206) at the peripapillary region, where the ocular manifestations in the microvasculature due to glaucoma, central retinal vein occlusion, etc., are most obvious. This supports the proposed algorithm as a strong candidate for automated vessel segmentation.
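The "2 Hessian features per scale" are typically the two eigenvalues of the per-pixel Hessian at a given Gaussian scale, which respond strongly to tubular structures such as vessels. A sketch of computing them from Gaussian second derivatives (illustrative; the paper's exact feature definitions may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigvals(img, sigma):
    # Second derivatives at scale sigma via Gaussian derivative filters.
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian per pixel.
    tr, det = Hxx + Hyy, Hxx * Hyy - Hxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    return tr / 2 - disc, tr / 2 + disc      # lambda1 <= lambda2

img = np.random.rand(64, 64)                 # stand-in for a fundus image channel
l1, l2 = hessian_eigvals(img, sigma=2.0)
print(l1.shape, l2.shape)                    # one feature pair per pixel per scale
```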
Vrahatis, Aristidis G; Rapti, Angeliki; Sioutas, Spyros; Tsakalidis, Athanasios
2017-01-01
In the era of Systems Biology and the growing flow of omics experimental data from high-throughput techniques, experimentalists need more precise pathway-based tools to unravel the inherent complexity of diseases and biological processes. Subpathway-based approaches are the emerging generation of pathway-based analysis, elucidating biological mechanisms from the perspective of local topologies within a complex pathway network. Towards this orientation, we developed PerSub, a graph-based algorithm which detects subpathways perturbed by a complex disease. The perturbations are imprinted through differentially expressed and co-expressed subpathways as recorded by RNA-seq experiments. Our novel algorithm is applied to data obtained from a real experimental study, and the identified subpathways provide biological evidence for brain aging.
Learning to forget: continual prediction with LSTM.
Gers, F A; Schmidhuber, J; Cummins, F
2000-10-01
Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.
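The forget gate proposed here became part of the standard LSTM cell: a sigmoid gate that scales the old cell state, letting the network reset internal resources on a continual stream. A minimal NumPy sketch of one such step (generic formulation with gate order i, f, o, g; weights here are random placeholders, not a trained model):

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = W @ np.concatenate([x, h]) + b        # all four gate pre-activations at once
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # forget gate scales old state
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

n_in, n_hid = 3, 5
rng = np.random.default_rng(4)
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):         # a continual, unsegmented input stream
    h, c = lstm_step(x, h, c, W, b)
print(h)
```

When the forget gate saturates near zero, the cell state is effectively reset, which is exactly the remedy the abstract describes for unbounded state growth.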
Quantitative 3D reconstruction of airway and pulmonary vascular trees using HRCT
NASA Astrophysics Data System (ADS)
Wood, Susan A.; Hoford, John D.; Hoffman, Eric A.; Zerhouni, Elias A.; Mitzner, Wayne A.
1993-07-01
Accurate quantitative measurements of airway and vascular dimensions are essential to evaluate function in the normal and diseased lung. In this report, a novel method is described for three-dimensional extraction and analysis of pulmonary tree structures using data from High Resolution Computed Tomography (HRCT). Serially scanned two-dimensional slices of the lower left lobe of isolated dog lungs were stacked to create a volume of data. Airway and vascular trees were three-dimensionally extracted using a three dimensional seeded region growing algorithm based on difference in CT number between wall and lumen. To obtain quantitative data, we reduced each tree to its central axis. From the central axis, branch length is measured as the distance between two successive branch points, branch angle is measured as the angle produced by two daughter branches, and cross sectional area is measured from a plane perpendicular to the central axis point. Data derived from these methods can be used to localize and quantify structural differences both during changing physiologic conditions and in pathologic lungs.
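The seeded region growing step can be sketched as a breadth-first flood fill that accepts 6-connected voxels whose CT number falls inside a lumen window. This is a generic sketch of the wall/lumen CT-number criterion, not the original implementation; the HU-like values and window are made up:

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, lo, hi):
    mask = np.zeros(volume.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:            # 6-connected neighbours
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and lo <= volume[n] <= hi:
                mask[n] = True
                q.append(n)
    return mask

vol = np.full((40, 40, 40), -900.0)           # parenchyma-like background
vol[5:35, 18:22, 18:22] = -990.0              # synthetic air-filled airway branch
mask = region_grow_3d(vol, seed=(20, 20, 20), lo=-1024, hi=-950)
print(mask.sum(), "voxels in the grown airway")
```

The grown mask can then be skeletonized to a central axis, from which branch lengths, angles, and perpendicular cross sections are measured as the abstract describes.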
Scalable Algorithms for Global Scale Remote Sensing Applications
NASA Astrophysics Data System (ADS)
Vatsavai, R. R.; Bhaduri, B. L.; Singh, N.
2015-12-01
The recent decade has witnessed major changes on the Earth, for example, deforestation, varying cropping and human settlement patterns, and crippling damage due to disasters. Accurate assessment of the damage caused by major natural and anthropogenic disasters is becoming critical due to increases in human and economic losses. This increase in loss of life and severe damage can be attributed to the growing population, as well as human migration to the disaster-prone regions of the world. Rapid assessment of these changes and dissemination of accurate information is critical for creating an effective emergency response. Change detection using high-resolution satellite images is a primary tool in assessing damage, monitoring biomass and critical infrastructure, and identifying new settlements. Existing change detection methods suffer from registration errors and are often based on pixel-wise (location-wise) comparison of spectral observations from a single sensor. In this paper we present a novel probabilistic change detection framework based on patch comparison, together with a GPU implementation that supports a near real-time rapid damage exploration capability.
Development of low level 226Ra analysis for live fish using gamma-ray spectrometry
NASA Astrophysics Data System (ADS)
Chandani, Z.; Prestwich, W. V.; Byun, S. H.
2017-06-01
A low-level 226Ra analysis method for live fish was developed using a 4π NaI(Tl) gamma-ray spectrometer. In order to find the best algorithm for achieving the lowest detection limit, the gamma-ray spectrum from a 226Ra point source was collected and nine different methods were attempted for spectral analysis. The lowest detection limit, 0.99 Bq for a one-hour counting time, occurred when the spectrum was integrated over the energy region of 50-2520 keV. To extend the 226Ra analysis to live fish, a Monte Carlo simulation model with a cylindrical fish in a water container was built using the MCNP code. From the simulation results, the spatial distribution of the efficiency and the efficiency correction factor for the live fish model were determined. The MCNP model can be conveniently modified when a different fish or container geometry is employed as the fish grow in real experiments.
Charging of nanoparticles in stationary plasma in a gas aggregation cluster source
NASA Astrophysics Data System (ADS)
Blažek, J.; Kousal, J.; Biederman, H.; Kylián, O.; Hanuš, J.; Slavínská, D.
2015-10-01
Clusters that grow into nanoparticles near the magnetron target of a gas aggregation cluster source (GAS) may acquire electric charge by collecting electrons and ions or through other mechanisms such as secondary- or photo-electron emission. The region of the GAS close to the magnetron may be considered stationary plasma. The steady-state charge distribution on nanoparticles can be determined by means of three possible models of cluster charging: a fluid model, a kinetic model, and a model employing Monte Carlo simulations. In the paper, the mathematical and numerical aspects of these models are analyzed in detail and the close links between them are clarified. Among other results, it is shown that Monte Carlo simulation may be considered a particular numerical technique for solving kinetic equations. Similarly, the equations of the fluid model result, after some approximation, from averaged kinetic equations. A new algorithm solving an in-principle unlimited set of kinetic equations is suggested. Its efficiency is verified on physical models based on experimental input data.
Meta-modeling soil organic carbon sequestration potential and its application at regional scale.
Luo, Zhongkui; Wang, Enli; Bryan, Brett A; King, Darran; Zhao, Gang; Pan, Xubin; Bende-Michl, Ulrike
2013-03-01
Upscaling the results from process-based soil-plant models to assess regional soil organic carbon (SOC) change and sequestration potential is a great challenge due to the lack of detailed spatial information, particularly soil properties. Meta-modeling can be used to simplify and summarize process-based models and significantly reduce the demand for input data and thus could be easily applied on regional scales. We used the pre-validated Agricultural Production Systems sIMulator (APSIM) to simulate the impact of climate, soil, and management on SOC at 613 reference sites across Australia's cereal-growing regions under a continuous wheat system. We then developed a simple meta-model to link the APSIM-modeled SOC change to primary drivers, i.e., the amount of recalcitrant SOC, plant available water capacity of soil, soil pH, and solar radiation, temperature, and rainfall in the growing season. Based on high-resolution soil texture data and 8165 climate data points across the study area, we used the meta-model to assess SOC sequestration potential and the uncertainty associated with the variability of soil characteristics. The meta-model explained 74% of the variation of final SOC content as simulated by APSIM. Applying the meta-model to Australia's cereal-growing regions reveals regional patterns in SOC, with higher SOC stock in cool, wet regions. Overall, the potential SOC stock ranged from 21.14 to 152.71 Mg/ha with a mean of 52.18 Mg/ha. Variation of soil properties induced uncertainty ranging from 12% to 117% with higher uncertainty in warm, wet regions. In general, soils in Australia's cereal-growing regions under continuous wheat production were simulated as a sink of atmospheric carbon dioxide with a mean sequestration potential of 8.17 Mg/ha.
Research of grasping algorithm based on scara industrial robot
NASA Astrophysics Data System (ADS)
Peng, Tao; Zuo, Ping; Yang, Hai
2018-04-01
As the tobacco industry grows and faces competition from international tobacco giants, efficient logistics service is one of the key competitive factors, and completing tobacco sorting tasks efficiently and economically is the goal of tobacco sorting optimization research. Current cigarette distribution systems use a single line to carry out single-brand sorting tasks; this article uses a single line to realize the sorting of cigarettes of different brands. A dedicated algorithm for the SCARA robot is adopted for sorting and packaging, and the optimization scheme significantly improves the performance indicators of the cigarette sorting system, saving labor and clearly improving production efficiency.
A Web-Based Search Service to Support Imaging Spectrometer Instrument Operations
NASA Technical Reports Server (NTRS)
Smith, Alexander; Thompson, David R.; Sayfi, Elias; Xing, Zhangfan; Castano, Rebecca
2013-01-01
Imaging spectrometers yield rich and informative data products, but interpreting them demands time and expertise. There is a continual need for new algorithms and methods for rapid first-draft analyses to assist analysts during instrument operations. Intelligent data analyses can summarize scenes to draft geologic maps, searching images to direct operator attention to key features. This validates data quality while facilitating rapid tactical decision making to select follow-up targets. Ideally these algorithms would operate in seconds, never grow bored, and be free from observation bias about the kinds of mineralogy that will be found.
Algorithms for classification of astronomical object spectra
NASA Astrophysics Data System (ADS)
Wasiewicz, P.; Szuppe, J.; Hryniewicz, K.
2015-09-01
Obtaining interesting celestial objects from tens of thousands or even millions of recorded optical-ultraviolet spectra depends not only on the data quality but also on the accuracy of spectra decomposition. Additionally, rapidly growing data volumes demand higher computing power and/or more efficient algorithm implementations. In this paper we speed up the process of subtracting iron transitions and fitting Gaussian functions to emission peaks utilising C++ and OpenCL methods together with a NoSQL database. We also implemented typical astronomical peak-detection methods for comparison with our previous hybrid methods implemented with CUDA.
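The per-peak operation being accelerated, fitting a Gaussian to an emission line after continuum/iron subtraction, can be sketched with a standard nonlinear least-squares call; the wavelength grid, line position, and noise level below are synthetic placeholders, not data from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

wave = np.linspace(4800, 5100, 300)           # wavelength grid [Angstrom]
flux = gauss(wave, 3.0, 4960.0, 8.0) \
     + np.random.default_rng(5).normal(0, 0.1, wave.size)

p0 = (flux.max(), wave[flux.argmax()], 5.0)   # crude initial guess from the data
(amp, mu, sigma), _ = curve_fit(gauss, wave, flux, p0=p0)
print(f"peak at {mu:.1f} A, FWHM = {2.355 * abs(sigma):.1f} A")
```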
Extending Wireless Rechargeable Sensor Network Life without Full Knowledge.
Najeeb, Najeeb W; Detweiler, Carrick
2017-07-17
When extending the life of Wireless Rechargeable Sensor Networks (WRSN), one challenge is charging networks as they grow larger. Overcoming this limitation will render a WRSN more practical and highly adaptable to growth in the real world. Most charging algorithms require a priori full knowledge of sensor nodes' power levels in order to determine the nodes that require charging. In this work, we present a probabilistic algorithm that extends the life of scalable WRSN without a priori power knowledge and without full network exploration. We develop a probability bound on the power level of the sensor nodes and utilize this bound to make decisions while exploring a WRSN. We verify the algorithm by simulating a wireless power transfer unmanned aerial vehicle, and charging a WRSN to extend its life. Our results show that, without knowledge, our proposed algorithm extends the life of a WRSN on average 90% of what an optimal full knowledge algorithm can achieve. This means that the charging robot does not need to explore the whole network, which enables the scaling of WRSN. We analyze the impact of network parameters on our algorithm and show that it is insensitive to a large range of parameter values.
Secured Hash Based Burst Header Authentication Design for Optical Burst Switched Networks
NASA Astrophysics Data System (ADS)
Balamurugan, A. M.; Sivasubramanian, A.; Parvathavarthini, B.
2017-12-01
Optical burst switching (OBS) is a promising technology that could meet the fast-growing network demand, able to serve the bandwidth requirements of applications that demand intensive bandwidth. OBS proves to be a satisfactory technology for tackling huge bandwidth constraints, but it suffers from security vulnerabilities. The objective of this work is to design a faster and more efficient burst header authentication algorithm for core nodes. There are two key features in this work, viz., header encryption and authentication. Since the burst header is an important component of an optical burst switched network, it has to be encrypted; otherwise it is prone to attack. The proposed MD5&RC4-4S based burst header authentication algorithm runs 20.75 ns faster than the conventional algorithms. The modification suggested in the proposed RC4-4S algorithm gives better security and solves the correlation problems between the publicly known outputs during the key generation phase. The modified MD5 recommended in this work provides a 7.81% better avalanche effect than the conventional algorithm. The device utilization results also show the suitability of the proposed algorithm for header authentication in real-time applications.
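The avalanche effect quoted above measures how many output bits change when one input bit is flipped (ideally about 50%). The modified MD5 of the paper is not public, so plain hashlib.md5 stands in below purely to illustrate the metric:

```python
import hashlib

def avalanche(msg: bytes, bit: int = 0) -> float:
    # Flip one input bit and report the fraction of output bits that change.
    flipped = bytearray(msg)
    flipped[bit // 8] ^= 1 << (bit % 8)
    a = int.from_bytes(hashlib.md5(msg).digest(), "big")
    b = int.from_bytes(hashlib.md5(bytes(flipped)).digest(), "big")
    return bin(a ^ b).count("1") / 128.0      # MD5 output is 128 bits

print(f"{avalanche(b'burst header payload'):.2%} of output bits flipped")
```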
A View from Above Without Leaving the Ground
NASA Technical Reports Server (NTRS)
2004-01-01
In order to deliver accurate geospatial data and imagery to the remote sensing community, NASA is constantly developing new image-processing algorithms while refining existing ones for technical improvement. For 8 years, the NASA Regional Applications Center at Florida International University has served as a test bed for implementing and validating many of these algorithms, helping the Space Program to fulfill its strategic and educational goals in the area of remote sensing. The algorithms in return have helped the NASA Regional Applications Center develop comprehensive semantic database systems for data management, as well as new tools for disseminating geospatial information via the Internet.
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
NASA Astrophysics Data System (ADS)
Munandar, T. A.; Azhari; Mushdholifah, A.; Arsyad, L.
2017-03-01
Disparities in regional development are commonly identified using the Klassen Typology and the Location Quotient. Both methods typically use data on the gross regional domestic product (GRDP) sectors of a particular region. The Klassen approach identifies regional disparities by classifying the GRDP sector data into four classes, namely Quadrants I, II, III, and IV; each quadrant indicates a certain level of regional disparity based on the region's GRDP sector values. The Location Quotient (LQ), meanwhile, is usually used to identify potential sectors in a particular region, determining which sectors have potential and which do not. LQ classifies each sector into three classes: the basic sector, the non-basic sector with a competitive advantage, and the non-basic sector that can only meet its own needs. However, neither the Klassen Typology nor the LQ can clearly visualize the relationships among the development achievements of the regions and sectors. This research aimed to develop a new approach to identifying disparities in regional development in the form of hierarchical clustering. Hierarchical Agglomerative Clustering (HAC) was employed as the basis of the hierarchical clustering model, with modifications based on the Klassen Typology and the LQ: the HAC variant modified with the Klassen Typology is called MHACK, and the variant modified with the LQ is called MACLoQ. The two algorithms identify regional disparities (MHACK) and potential sectors (MACLoQ), respectively, in the form of hierarchical clusters. Applying MHACK to 31 regencies in Central Java Province identifies 3 regencies (Demak, Jepara, and Magelang City) as developed and rapidly-growing regions, while the other 28 regencies fall into the category of developed but depressed regions. Results of the MACLoQ implementation suggest that only 1 regency (Banyumas) falls into the basic-sector category, while the other regencies fall into the non-basic, non-competitive sector category.
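A minimal sketch of the two classical ingredients the modified HAC variants build on: the Location Quotient and the Klassen quadrant assignment. The quadrant labels follow the common convention (growth rate and income compared against reference means); all argument names and the sample figures are illustrative, not taken from the paper.

```python
# Sketch of the LQ formula and Klassen quadrant rule that MACLoQ and
# MHACK respectively build on; thresholds and names are illustrative.

def location_quotient(region_sector, region_total, ref_sector, ref_total):
    """LQ > 1 conventionally marks a basic (export-oriented) sector."""
    return (region_sector / region_total) / (ref_sector / ref_total)

def klassen_quadrant(growth, income, ref_growth, ref_income):
    """Classify a region against reference means:
    I  = developed and rapidly growing, II = developed but depressed,
    III = developing,                   IV = relatively lagging."""
    if growth >= ref_growth and income >= ref_income:
        return "I"
    if growth < ref_growth and income >= ref_income:
        return "II"
    if growth >= ref_growth and income < ref_income:
        return "III"
    return "IV"

print(location_quotient(120, 1000, 900, 10000))    # 1.33 -> basic sector
print(klassen_quadrant(0.06, 2.4e7, 0.05, 2.0e7))  # 'I'
```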
Recognition of strong earthquake-prone areas with a single learning class
NASA Astrophysics Data System (ADS)
Gvishiani, A. D.; Agayan, S. M.; Dzeboev, B. A.; Belov, I. O.
2017-05-01
This article presents Barrier, a new recognition-with-learning algorithm designed for identifying earthquake-prone areas. In contrast to the Crust (Kora) algorithm used by the classical EPA approach, the Barrier algorithm learns from just one "pure" high-seismicity class. The new algorithm operates in the space of absolute values of the geological-geophysical parameters of the objects. The algorithm is applied to the recognition of areas prone to earthquakes with M ≥ 6.0 in the Caucasus region. A comparative analysis of the Crust and Barrier algorithms demonstrates the productive coherence of their results.
North Atlantic Regional Water Resources Study. Main Report
1972-06-01
Areas of the Region are found in Annex 1 to this Report. These Area Programs have been reformulated into... Physical Characteristics of The Region... The North Atlantic Region... Delaware River Basin (Area 15)... The NAR is presently growing at a slower rate... double to 86.2 million by the year 2020. The rate of growth is about 80 percent of that... Use of wells and of waste water intakes, while small, is growing at an increased rate. Publicly supplied and...
Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2011-01-01
A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed by iteratively merging the pair of neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for the new regions. The DC is computed by comparing the mean vectors, class labels, and pixel counts of the two regions under consideration. The algorithm converges when all pixels have taken part in the region-merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions when compared to previously proposed classification techniques.
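A minimal sketch of the best-merge loop under simplifying assumptions: the dissimilarity here compares only region mean vectors, whereas the paper's DC also weighs class labels and region sizes, and the stopping rule is reduced to a target region count.

```python
# Best-merge region merging driven by distance between region mean vectors;
# a simplified stand-in for the paper's hierarchical step-wise optimization.
import numpy as np

def merge_regions(means, sizes, adjacency, target):
    """means: {region_id: mean vector}; sizes: {region_id: pixel count};
    adjacency: set of frozenset({i, j}) for neighboring region pairs.
    Merge the most similar adjacent pair until `target` regions remain."""
    while len(means) > target and adjacency:
        pair = min(adjacency,
                   key=lambda p: np.linalg.norm(means[min(p)] - means[max(p)]))
        i, j = sorted(pair)
        # keep label i; update its mean as the size-weighted average
        means[i] = (sizes[i] * means[i] + sizes[j] * means[j]) / (sizes[i] + sizes[j])
        sizes[i] += sizes[j]
        del means[j], sizes[j]
        # redirect j's neighbors to i and drop degenerate self-pairs
        adjacency = {frozenset(i if v == j else v for v in p)
                     for p in adjacency if p != pair}
        adjacency = {p for p in adjacency if len(p) == 2}
    return means, sizes

means = {0: np.array([1.0, 2.0]), 1: np.array([1.1, 2.1]), 2: np.array([9.0, 9.0])}
sizes = {0: 50, 1: 40, 2: 60}
neighbors = {frozenset({0, 1}), frozenset({1, 2})}
merge_regions(means, sizes, neighbors, target=2)   # merges regions 0 and 1
```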
DOGMA: A Disk-Oriented Graph Matching Algorithm for RDF Databases
NASA Astrophysics Data System (ADS)
Bröcheler, Matthias; Pugliese, Andrea; Subrahmanian, V. S.
RDF is an increasingly important paradigm for the representation of information on the Web. As RDF databases grow to tens of millions of triples, and as sophisticated graph matching queries expressible in languages like SPARQL become increasingly important, scalability becomes an issue. To date, no graph-based indexing method for RDF data has been designed to be disk-resident, yet there is a growing need for indexes that operate efficiently when the index itself resides on disk. In this paper, we first propose the DOGMA index for fast subgraph matching on disk and then develop a basic algorithm to answer queries over this index. This algorithm is then significantly sped up via an optimized algorithm that uses efficient (but correct) pruning strategies, combined with two different extensions of the index. We have implemented a preliminary system and tested it against four existing RDF database systems developed by others. Our experiments show that our algorithm performs very well compared to these systems, with orders-of-magnitude improvements for complex graph queries.
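To make the query-answering skeleton concrete, here is a minimal in-memory backtracking matcher for triple patterns. DOGMA's contribution, a disk-resident index with pruning strategies, is exactly what accelerates this naive exhaustive search; none of that is reproduced here, and the toy data is illustrative.

```python
# Naive backtracking matcher for SPARQL-like triple patterns over a set of
# ground triples. Variables start with '?'; everything else is a constant.

def match(patterns, edges, binding=None):
    """Yield every variable binding satisfying all (s, p, o) patterns."""
    binding = binding or {}
    if not patterns:
        yield dict(binding)
        return
    s, p, o = patterns[0]
    for (es, ep, eo) in edges:
        trial = dict(binding)
        ok = True
        for q, e in ((s, es), (p, ep), (o, eo)):
            if q.startswith('?'):
                # bind the variable, or fail if already bound differently
                if trial.setdefault(q, e) != e:
                    ok = False
                    break
            elif q != e:
                ok = False
                break
        if ok:
            yield from match(patterns[1:], edges, trial)

edges = {("alice", "knows", "bob"), ("bob", "knows", "carol")}
query = [("?x", "knows", "?y"), ("?y", "knows", "?z")]
print(list(match(query, edges)))   # [{'?x': 'alice', '?y': 'bob', '?z': 'carol'}]
```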
An Algorithm for Pedestrian Detection in Multispectral Image Sequences
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.
2017-05-01
The growing interest in self-driving cars creates a demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearance of pedestrians; poor visibility conditions such as fog and low light also significantly degrade detection quality. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on a simplified Kalman filtering scheme suitable for realization on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model, while an estimate of the real optical flow is computed from a multispectral image sequence. The difference between the synthetic and real optical flows yields the flow induced by pedestrians, and the final detection is done by segmenting this difference. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
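A minimal sketch of the flow-difference step, assuming the synthetic ground-plane flow is given (the slanted-plane model is not reproduced) and using OpenCV's Farneback estimator as a stand-in for whatever flow estimator the authors employ; the threshold is illustrative.

```python
# Segment pedestrian candidates as large residuals between measured and
# synthetic optical flow; synthetic_flow is assumed precomputed elsewhere.
import numpy as np
import cv2

def pedestrian_mask(prev_gray, cur_gray, synthetic_flow, thresh=2.0):
    # dense optical flow between consecutive grayscale frames (Farneback)
    real_flow = cv2.calcOpticalFlowFarneback(
        prev_gray, cur_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # per-pixel magnitude of the flow induced by moving pedestrians
    residual = np.linalg.norm(real_flow - synthetic_flow, axis=2)
    return residual > thresh   # boolean candidate mask

# usage (frames and synthetic_flow come from the capture pipeline):
# mask = pedestrian_mask(frame0, frame1, synthetic_flow)
```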
A multifaceted independent performance analysis of facial subspace recognition algorithms.
Bajwa, Usama Ijaz; Taj, Imtiaz Ahmad; Anwar, Muhammad Waqas; Wang, Xuan
2013-01-01
Face recognition has emerged as the fastest growing biometric technology and has expanded considerably in the last few years. Many new algorithms and commercial systems have been proposed and developed, most of them using Principal Component Analysis (PCA) as the basis of their techniques. Different and even conflicting results have been reported by researchers comparing these algorithms. The purpose of this study is to provide an independent comparative analysis of both the performance and the computational complexity of six appearance-based face recognition algorithms, namely PCA, 2DPCA, A2DPCA, (2D)(2)PCA, LPP, and 2DLPP, under equal working conditions. The study was motivated by the lack of an unbiased, comprehensive comparative analysis of some recent subspace methods with diverse distance-metric combinations. For comparability with other studies, the FERET, ORL, and YALE databases have been used, with evaluation criteria following those of the FERET evaluations, which closely simulate real-life scenarios. Results are compared with previous studies and anomalies are reported. An important contribution of this study is that it presents the suitable performance conditions for each of the algorithms under consideration.
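For orientation, a baseline PCA (eigenface) pipeline of the kind all six compared algorithms extend: mean-centred face vectors are projected onto the leading principal components and classified by nearest neighbour. The Euclidean metric used below is only one of the distance metrics the study varies.

```python
# Baseline eigenface pipeline: PCA projection + nearest-neighbour matching.
import numpy as np

def pca_fit(train, k):
    """train: (n_images, n_pixels) matrix of flattened faces."""
    mean = train.mean(axis=0)
    centered = train - mean
    # economical SVD: rows of vt are the principal directions (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def pca_project(x, mean, components):
    """Project one face (or a stack of faces) into the k-dim subspace."""
    return (x - mean) @ components.T

def nearest_neighbor(probe, gallery, labels):
    d = np.linalg.norm(gallery - probe, axis=1)   # Euclidean (L2) metric
    return labels[np.argmin(d)]

# usage with a training matrix and a probe image (data supplied elsewhere):
# mean, comps = pca_fit(train_matrix, k=40)
# gallery = pca_project(train_matrix, mean, comps)
# identity = nearest_neighbor(pca_project(probe_img, mean, comps), gallery, labels)
```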