Sample records for automatic high resolution

  1. Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data

    NASA Astrophysics Data System (ADS)

    Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.

    2015-04-01

    In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic correction module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for the generation of high level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for the full-frame sensor currently under development by SPACE-SI is planned. The proposed paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
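    A minimal sketch of the kind of chip-based super-fine GCP positioning this record describes, assuming the reference chip and the search image are already resampled to a common grid; the OpenCV normalized cross-correlation call stands in for whatever matcher STORM actually uses, and the helper name, window size, and variable names are illustrative.

    ```python
    # Sketch only: refine an approximate GCP location by matching a small
    # reference raster chip inside a search window with normalized
    # cross-correlation (OpenCV). Assumes the window lies fully inside the image.
    import cv2
    import numpy as np

    def refine_gcp(image, chip, approx_xy, search_radius=32):
        """Return the refined top-left (x, y) of the chip near approx_xy."""
        x0, y0 = approx_xy                      # approximate chip top-left corner
        h, w = chip.shape
        window = image[y0 - search_radius:y0 + search_radius + h,
                       x0 - search_radius:x0 + search_radius + w]
        score = cv2.matchTemplate(window.astype(np.float32),
                                  chip.astype(np.float32), cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(score)    # (dx, dy) of the best match
        return x0 - search_radius + best[0], y0 - search_radius + best[1]
    ```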

  2. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    NASA Astrophysics Data System (ADS)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high resolution spatial data, such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated imaging in environmental management studies, such as studies of climate-related changes, as well as increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (QuickBird, WorldView and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. About 98% overall accuracy and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
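    A minimal sketch of the SOM clustering step underlying this approach, using the third-party MiniSom package on a hypothetical array of per-pixel features; the map size, iteration count, and placeholder features are illustrative choices, not the authors' configuration.

    ```python
    # Sketch only: cluster per-pixel feature vectors with a Self-Organizing Map
    # (competitive learning) and report the quantization error.
    import numpy as np
    from minisom import MiniSom

    pixels = np.random.rand(10000, 3)                 # placeholder pixel features
    som = MiniSom(8, 8, pixels.shape[1], sigma=1.0, learning_rate=0.5)
    som.random_weights_init(pixels)
    som.train_random(pixels, 5000)                    # unsupervised competitive learning
    bmus = np.array([som.winner(p) for p in pixels])  # best-matching unit per pixel
    print("quantization error:", som.quantization_error(pixels))
    ```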

  3. MASH Suite: a user-friendly and versatile software interface for high-resolution mass spectrometry data interpretation and visualization.

    PubMed

    Guner, Huseyin; Close, Patrick L; Cai, Wenxuan; Zhang, Han; Peng, Ying; Gregorich, Zachery R; Ge, Ying

    2014-03-01

    The rapid advancements in mass spectrometry (MS) instrumentation, particularly in Fourier transform (FT) MS, have made the acquisition of high-resolution and high-accuracy mass measurements routine. However, the software tools for the interpretation of high-resolution MS data are underdeveloped. Although several algorithms for the automatic processing of high-resolution MS data are available, there is still an urgent need for a user-friendly interface with functions that allow users to visualize and validate the computational output. Therefore, we have developed MASH Suite, a user-friendly and versatile software interface for processing high-resolution MS data. MASH Suite contains a wide range of features that allow users to easily navigate through data analysis, visualize complex high-resolution MS data, and manually validate automatically processed results. Furthermore, it provides easy, fast, and reliable interpretation of top-down, middle-down, and bottom-up MS data. MASH Suite is convenient, easily operated, and freely available. It can greatly facilitate the comprehensive interpretation and validation of high-resolution MS data with high accuracy and reliability.

  4. Automatic optimization high-speed high-resolution OCT retinal imaging at 1μm

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Liu, Xiyun; Miao, Dongkai; Lee, Sujin; Lee, Sieun; Bonora, Stefano; Zawadzki, Robert J.; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2015-03-01

    High-resolution OCT retinal imaging is important in providing visualization of various retinal structures to aid researchers in better understanding the pathogenesis of vision-robbing diseases. However, conventional optical coherence tomography (OCT) systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic optimization for high-resolution, extended-focal-range clinical retinal imaging. A variable-focus liquid lens was added to correct for defocus in real time. GPU-accelerated segmentation and optimization were used to provide real-time, layer-specific en face visualization as well as depth-specific focus adjustment. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the optic nerve head (ONH), from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  5. A fast and automatic mosaic method for high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlap rectangle is computed from the geographical locations of the reference and mosaic images, and feature points are extracted from the overlap region of both images by the scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of the two images. Finally, the two images are fused into a seamless panoramic image by simple linear weighted fusion or another blending method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
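    A minimal Python sketch of the matching core described here (the paper's implementation is in C++ with OpenCV and GDAL): SIFT keypoints from the overlap crops, a ratio test, and RANSAC-filtered homography estimation. The function name and thresholds are illustrative.

    ```python
    # Sketch only: SIFT matching between the overlap crops of the reference and
    # mosaic images, with RANSAC outlier rejection (OpenCV >= 4.4 for SIFT).
    import cv2
    import numpy as np

    def match_overlap(ref_roi, mos_roi):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(ref_roi, None)
        k2, d2 = sift.detectAndCompute(mos_roi, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        # Homography mapping the mosaic crop onto the reference crop;
        # RANSAC discards mismatched feature pairs.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H, inliers
    ```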

  6. Automatic Registration of GF4 PMS: a High Resolution Multi-Spectral Sensor on Board a Satellite in Geostationary Orbit

    NASA Astrophysics Data System (ADS)

    Gao, M.; Li, J.

    2018-04-01

    Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on the manual selection of control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration, which involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary to identify the best combination of parameters and steps. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of base band for the automatic registration, and the configuration of the GF4 PMS spatial resolution.

  7. Automatic Coregistration and orthorectification (ACRO) and subsequent mosaicing of NASA high-resolution imagery over the Mars MC11 quadrangle, using HRSC as a baseline

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian

    2018-02-01

    This work presents the coregistered, orthorectified and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered GeoTIFF image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of static and dynamic features of Mars. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require the extensive use of human resources.

  8. Automatic public access to documents and maps stored on an internal secure system.

    NASA Astrophysics Data System (ADS)

    Trench, James; Carter, Mary

    2013-04-01

    The Geological Survey of Ireland operates a Document Management System that provides documents and maps, stored internally in high resolution and in a highly secure environment, to an external service where the documents are automatically presented in a lower resolution to members of the public. Security is managed through roles and individual users, where access can be set at the role level and the folder level. The application is an electronic document/data management (EDM) system with an integrated Geographical Information System (GIS) component that allows users to query an interactive map of Ireland for data relating to a particular area of interest. The data stored in the database consist of Bedrock Field Sheets, Bedrock Notebooks, Bedrock Maps, Geophysical Surveys, Geotechnical Maps & Reports, Groundwater, GSI Publications, Marine, Mine Records, Mineral Localities, Open File, Quaternary and Unpublished Reports. The Konfig application tool is both an internal and a public-facing application. It acts as a tool for high resolution data entry, with the data stored in a high resolution vault. The public-facing application is a mirror of the internal application and differs only in that it converts the high resolution data into a low resolution format stored in a low resolution vault, making the data web-friendly for the end user to download.

  9. Fully automatic segmentation of femurs with medullary canal definition in high and in low resolution CT scans.

    PubMed

    Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu

    2016-12-01

    Femur segmentation can be an important tool in orthopedic surgical planning. However, in order to remove the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define the medullary canal automatically and performs well even in low resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. In order to achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur, with a novel initialization method, was used to drive the template mesh deformation so as to fit the in-image femoral shape in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high resolution CT image group the average error is less than 1 mm. For the low resolution image group the results are also accurate, with an average error of less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  10. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.

  11. An automatic chip structure optical inspection system for electronic components

    NASA Astrophysics Data System (ADS)

    Song, Zhichao; Xue, Bindang; Liang, Jiyuan; Wang, Ke; Chen, Junzhang; Liu, Yunhe

    2018-01-01

    An automatic chip structure inspection system based on machine vision is presented to ensure the reliability of electronic components. It consists of four major modules: a metallographic microscope, a Gigabit Ethernet high-resolution camera, a control system and a high performance computer. An auto-focusing technique is presented to address the problem that, under the high magnification of the microscope, the chip surface does not lie on a single focal plane. A panoramic high-resolution image stitching algorithm is adopted to resolve the trade-off between resolution and field of view caused by the different sizes of electronic components. In addition, we establish a database to store and recall the appropriate parameters, ensuring the consistency of chip images for electronic components of the same model. We use image change detection technology to detect defects in the chip images of electronic components. The system can achieve high-resolution imaging of chips of electronic components of various sizes, clear imaging of chip surfaces that do not lie in a single plane, standardized imaging for components of the same model, and recognition of chip defects.
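    A minimal sketch of a standard focus measure of the kind an auto-focus routine can maximize while stepping the microscope stage; this is a generic variance-of-Laplacian metric, not the system's published algorithm, and the capture call in the comment is hypothetical.

    ```python
    # Sketch only: sharpness metric to maximize during an auto-focus sweep.
    import cv2

    def focus_measure(gray):
        """Variance of the Laplacian: larger values indicate a sharper image."""
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    # Hypothetical usage, with capture_at(z) standing in for a camera grab:
    # best_z = max(z_positions, key=lambda z: focus_measure(capture_at(z)))
    ```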

  12. Evaluation of Pan-Sharpening Methods for Automatic Shadow Detection in High Resolution Images of Urban Areas

    NASA Astrophysics Data System (ADS)

    de Azevedo, Samara C.; Singh, Ramesh P.; da Silva, Erivaldo A.

    2017-04-01

    The finer spatial resolution of areas with tall objects within the urban environment causes intense shadows that lead to erroneous information in urban mapping. Due to these shadows, automatic detection of objects (such as buildings, trees, structures and towers) and estimation of surface coverage from high spatial resolution data are difficult. Thus, automatic shadow detection is a necessary first preprocessing step to improve the outcome of many remote sensing applications, particularly for high spatial resolution images. Efforts have been made to explore spatial and spectral information to evaluate such shadows. In this paper, we have used morphological attribute filtering to extract contextual relations in an efficient multilevel approach for high resolution images. The attribute selected for the filtering was the area estimated from the shadow spectral feature using the Normalized Saturation-Value Difference Index (NSVDI) derived from pan-sharpened images. In order to assess the quality of the fusion products and their influence on the shadow detection algorithm, we evaluated three pan-sharpening methods - Intensity-Hue-Saturation (IHS), Principal Components (PC) and Gram-Schmidt (GS) - through the image quality measures Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Dimensionless Global Error in Synthesis (ERGAS) and Universal Image Quality Index (UIQI). Experimental results over a WorldView-2 scene of São Paulo city (Brazil) show that the GS method provides good correlation with the original multispectral bands and no radiometric or contrast distortion. The automatic method using GS pan-sharpening for NSVDI generation provides a clear distinction between shadow and non-shadow pixels with an overall accuracy of more than 90%. The experimental results confirm the effectiveness of the proposed approach, which could be used for subsequent shadow removal and is reliable for object recognition, land-cover mapping, 3D reconstruction, etc., especially in developing countries where land use and land cover are rapidly changing and tall objects are common within urban areas.
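    A minimal sketch of the NSVDI shadow cue this work builds on, assuming the pan-sharpened result is available as an 8-bit RGB composite; the fixed threshold stands in for the paper's multilevel attribute filtering and is illustrative only.

    ```python
    # Sketch only: Normalized Saturation-Value Difference Index (S - V)/(S + V)
    # computed from the HSV transform of a pan-sharpened RGB composite.
    import cv2
    import numpy as np

    def nsvdi_shadow_mask(rgb, threshold=0.0):
        hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV).astype(np.float32)
        s, v = hsv[..., 1], hsv[..., 2]
        nsvdi = (s - v) / (s + v + 1e-6)    # shadows tend toward high NSVDI
        return nsvdi > threshold
    ```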

  13. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection - A First Study

    PubMed Central

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
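    A minimal sketch of the underlying idea, not the authors' MCD algorithm: pixels become graph nodes, FLT similarity becomes edge weight, and a resolution parameter controls segment size. It assumes a small image patch and a recent networkx release that provides louvain_communities; the similarity kernel and parameters are illustrative.

    ```python
    # Sketch only: community-based segmentation of a small FLT patch; the
    # resolution argument plays the role of the network resolution in the paper.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import louvain_communities

    def segment_flt_patch(flt, sigma=0.5, resolution=1.0):
        h, w = flt.shape
        g = nx.grid_2d_graph(h, w)                            # 4-connected pixel lattice
        for a, b in g.edges():
            diff = abs(float(flt[a]) - float(flt[b]))
            g[a][b]["weight"] = np.exp(-(diff / sigma) ** 2)  # FLT similarity
        communities = louvain_communities(g, weight="weight", resolution=resolution)
        labels = np.zeros((h, w), dtype=int)
        for k, nodes in enumerate(communities):
            for node in nodes:
                labels[node] = k
        return labels
    ```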

  14. Integrating High-Resolution Taskable Imagery into a Sensorweb for Automatic Space-Based Monitoring of Flooding in Thailand

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mclaren, David; Doubleday, Joshua; Tran, Daniel; Tanpipat, Veerachai; Chitradon, Royol; Boonya-aroonnet, Surajate; Thanapakpawin, Porranee; Mandl, Daniel

    2012-01-01

    Several space-based assets (Terra, Aqua, Earth Observing One) have been integrated into a sensorweb to monitor flooding in Thailand. In this approach, Moderate Resolution Imaging Spectroradiometer (MODIS) data from Terra and Aqua are used to perform broad-scale monitoring to track flooding at the regional level (250 m/pixel), and EO-1 is autonomously tasked in response to alerts to acquire higher resolution (30 m/pixel) Advanced Land Imager (ALI) data. These data are then automatically processed to derive products such as surface water extent and volumetric water estimates. These products are then automatically pushed to organizations in Thailand for use in damage estimation, relief efforts, and damage mitigation. More recently, this sensorweb structure has been used to request imagery, access imagery, and process high-resolution (several m to 30 m), targetable asset imagery from commercial assets including WorldView-2, IKONOS, Radarsat-2, Landsat-7, and GeoEye-1. We describe the overall sensorweb framework as well as new workflows and products made possible via these extensions.

  15. Large-field high-resolution mosaic movies

    NASA Astrophysics Data System (ADS)

    Hammerschlag, Robert H.; Sliepen, Guus; Bettonvil, Felix C. M.; Jägers, Aswin P. L.; Sütterlin, Peter; Martin, Sara F.

    2012-09-01

    Movies with fields-of-view larger than is normal for high-resolution telescopes will give a better understanding of processes on the Sun, such as filament and active region developments and their possible interactions. New active regions can, by their emergence, influence their environment to the extent of possibly serving as an igniter of the eruption of a nearby filament. A method to create a large field-of-view is to join several fields-of-view into a mosaic. Fields are imaged quickly one after another using fast telescope pointing. Such a pointing cycle has been automated at the Dutch Open Telescope (DOT), a high-resolution solar telescope located on the Canary Island of La Palma. The observer can draw the desired total field with the computer mouse in the guider-telescope image of the whole Sun. The guider telescope is equipped with an H-alpha filter and electronic contrast enhancement for good visibility of filaments and prominences. The number and positions of the subfields are calculated automatically and represented by an array of bright points indicating the subfield centers inside the drawn rectangle of the total field on the computer screen with the whole-Sun image. When the exposures start, the telescope automatically repeats the sequence of subfields. Automatic production of flats is also programmed, including defocusing and fast motion of the image field over the solar disk. For the first time, mosaic movies were programmed from stored information on automated telescope motions from one field to the next. The mosaic movies fill the gap between whole-Sun images with the limited resolution of synoptic telescopes, including space instruments, and small-field high-cadence movies from high-resolution solar telescopes.

  16. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    PubMed

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  17. Investigation on the separability of slums by multi-aspect TerraSAR-X dual-co-polarized high resolution spotlight images based on the multi-scale evaluation of local distributions

    NASA Astrophysics Data System (ADS)

    Schmitt, Andreas; Sieg, Tobias; Wurm, Michael; Taubenböck, Hannes

    2018-02-01

    Following recent advances in distinguishing settlement from non-settlement areas in the latest SAR data, the question arises whether a further automatic intra-urban delineation and characterization of different structural types is possible. This paper studies the appearance of the structural type 'slums' in high resolution SAR images. Geocoded Kennaugh elements are used as backscatter information and Schmittlet indices as descriptors of local texture. Three cities with a significant share of slums (Cape Town, Manila, Mumbai) are chosen as test sites. These are imaged by TerraSAR-X in the dual-co-polarized high resolution spotlight mode at every available aspect angle. Representative distributions are estimated and fused by a robust approach. Our observations identify a high similarity of slums across all three test sites. The derived similarity maps are validated with reference data sets from visual interpretation and ground truth. The final validation strategy is based on completeness and correctness versus other classes in relation to the similarity. High accuracies (up to 87%) in identifying morphologic slums are reached for Cape Town. For Manila (up to 60%) and Mumbai (up to 54%), the distinction is more difficult due to their complex structural configuration. In conclusion, high resolution SAR data can be suitable for automatically tracing potential locations of slums. Polarimetric information and the incidence angle seem to have a negligible impact on the results, whereas the intensity patterns and the passing direction of the satellite play a key role. Hence, the combination of intensity images (brightness) acquired from ascending and descending orbits together with Schmittlet indices (spatial pattern) promises the best results. The transfer from the automatically recognized physical similarity to the semantic interpretation remains challenging.

  18. The role of the P3 and CNV components in voluntary and automatic temporal orienting: A high spatial-resolution ERP study.

    PubMed

    Mento, Giovanni

    2017-12-01

    A main distinction has been proposed between voluntary and automatic mechanisms underlying temporal orienting (TO) of selective attention. Voluntary TO implies the endogenous directing of attention induced by symbolic cues. Conversely, automatic TO is exogenously instantiated by the physical properties of stimuli. A well-known example of automatic TO is sequential effects (SEs), which refer to the adjustments in participants' behavioral performance as a function of the trial-by-trial sequential distribution of the foreperiod between two stimuli. In this study a group of healthy adults underwent a cued reaction time task purposely designed to assess both voluntary and automatic TO. During the task, both post-cue and post-target event-related potentials (ERPs) were recorded by means of a high spatial resolution EEG system. In the results of the post-cue analysis, the P3a and P3b were identified as two distinct ERP markers showing distinguishable spatiotemporal features and reflecting automatic and voluntary a priori expectancy generation, respectively. The brain source reconstruction further revealed that distinct cortical circuits supported these two temporally dissociable components. Namely, the voluntary P3b was supported by a left sensorimotor network, while the automatic P3a was generated by a more distributed frontoparietal circuit. Additionally, post-cue contingent negative variation (CNV) and post-target P3 modulations were observed as common markers of voluntary and automatic expectancy implementation and response selection, although partially dissociable neural networks subserved these two mechanisms. Overall, these results provide new electrophysiological evidence suggesting that distinct neural substrates can be recruited depending on the voluntary or automatic cognitive nature of the cognitive mechanisms subserving TO. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    PubMed

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  20. Spatial Classification of Orchards and Vineyards with High Spatial Resolution Panchromatic Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warner, Timothy; Steinmaus, Karen L.

    2005-02-01

    New high resolution single spectral band imagery offers the capability to conduct image classifications based on spatial patterns in imagery. A classification algorithm based on autocorrelation patterns was developed to automatically extract orchards and vineyards from satellite imagery. The algorithm was tested on IKONOS imagery over Granger, WA, which resulted in a classification accuracy of 95%.
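    A minimal sketch of a per-tile spatial autocorrelation signature of the sort such a spatial-pattern classifier could exploit; this FFT-based circular autocorrelation is an illustration, not the algorithm developed in the record above.

    ```python
    # Sketch only: normalized 2-D (circular) autocorrelation of an image tile;
    # row-crop patterns such as orchards and vineyards show periodic side peaks.
    import numpy as np

    def autocorrelation(tile):
        tile = tile.astype(float) - tile.mean()
        spec = np.fft.fft2(tile)
        acf = np.fft.ifft2(spec * np.conj(spec)).real
        return np.fft.fftshift(acf) / (acf[0, 0] + 1e-12)   # zero lag at the center
    ```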

  1. Techniques for automatic large scale change analysis of temporal multispectral imagery

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.

  2. A cost-effective strategy for nonoscillatory convection without clipping

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1990-01-01

    Clipping of narrow extrema and distortion of smooth profiles are well known problems associated with so-called high resolution nonoscillatory convection schemes. A strategy is presented for accurately simulating highly convective flows containing discontinuities, such as density fronts or shock waves, without distorting smooth profiles or clipping narrow local extrema. The convection algorithm is based on non-artificially-diffusive third-order upwinding in smooth regions, with automatic adaptive stencil expansion to (in principle, arbitrarily) higher order upwinding locally, in regions of rapidly changing gradients. This is highly cost effective because the wider stencil is used only where needed: in isolated narrow regions. A recently developed universal limiter assures sharp monotonic resolution of discontinuities without introducing artificial diffusion or numerical compression. An adaptive discriminator is constructed to distinguish between spurious overshoots and physical peaks; this automatically relaxes the limiter near local turning points, thereby avoiding loss of resolution in narrow extrema. Examples are given for one-dimensional pure convection of scalar profiles at constant velocity.
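    A minimal illustration (my notation, not the paper's formulation) of a third-order, upwind-biased face interpolation in the spirit of Leonard's QUICK scheme, which is the kind of estimate such a method falls back to in smooth regions; a uniform grid and positive (left-to-right) velocity are assumed, and no limiter is applied here.

    ```python
    # Sketch only: QUICK face value at i+1/2 from far-upstream, upstream and
    # downstream cell values, for flow in the +x direction on a uniform grid.
    import numpy as np

    def quick_face_values(phi):
        phi_u, phi_c, phi_d = phi[:-2], phi[1:-1], phi[2:]
        return 0.75 * phi_c + 0.375 * phi_d - 0.125 * phi_u   # 3rd-order upwind-biased
    ```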

  3. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for this aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output, so that the image is better suited for viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe and complex environments.
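    A minimal sketch of an automatic gamma correction of the kind mentioned above, using the common heuristic of mapping the mean intensity toward mid-grey; this is an illustration, not the camera's actual control law.

    ```python
    # Sketch only: choose gamma so that (mean/255)**gamma lands near the target
    # mid-grey level, then apply it with a lookup table (8-bit input assumed).
    import cv2
    import numpy as np

    def auto_gamma(img, target=0.5):
        mean = max(np.mean(img) / 255.0, 1e-3)
        gamma = np.log(target) / np.log(mean)
        lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
        return cv2.LUT(img, lut)
    ```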

  4. Attention Modifies Spatial Resolution According to Task Demands.

    PubMed

    Barbot, Antoine; Carrasco, Marisa

    2017-03-01

    How does visual attention affect spatial resolution? In texture-segmentation tasks, exogenous (involuntary) attention automatically increases resolution at the attended location, which improves performance where resolution is too low (at the periphery) but impairs performance where resolution is already too high (at central locations). Conversely, endogenous (voluntary) attention improves performance at all eccentricities, which suggests a more flexible mechanism. Here, using selective adaptation to spatial frequency, we investigated the mechanism by which endogenous attention benefits performance in resolution tasks. Participants detected a texture target that could appear at several eccentricities. Adapting to high or low spatial frequencies selectively affected performance in a manner consistent with changes in resolution. Moreover, adapting to high, but not low, frequencies mitigated the attentional benefit at central locations where resolution was too high; this shows that attention can improve performance by decreasing resolution. Altogether, our results indicate that endogenous attention benefits performance by modulating the contribution of high-frequency information in order to flexibly adjust spatial resolution according to task demands.

  5. Attention Modifies Spatial Resolution According to Task Demands

    PubMed Central

    Barbot, Antoine; Carrasco, Marisa

    2017-01-01

    How does visual attention affect spatial resolution? In texture-segmentation tasks, exogenous (involuntary) attention automatically increases resolution at the attended location, which improves performance where resolution is too low (at the periphery) but impairs performance where resolution is already too high (at central locations). Conversely, endogenous (voluntary) attention improves performance at all eccentricities, which suggests a more flexible mechanism. Here, using selective adaptation to spatial frequency, we investigated the mechanism by which endogenous attention benefits performance in resolution tasks. Participants detected a texture target that could appear at several eccentricities. Adapting to high or low spatial frequencies selectively affected performance in a manner consistent with changes in resolution. Moreover, adapting to high, but not low, frequencies mitigated the attentional benefit at central locations where resolution was too high; this shows that attention can improve performance by decreasing resolution. Altogether, our results indicate that endogenous attention benefits performance by modulating the contribution of high-frequency information in order to flexibly adjust spatial resolution according to task demands. PMID:28118103

  6. High-throughput automatic defect review for 300mm blank wafers with atomic force microscope

    NASA Astrophysics Data System (ADS)

    Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il

    2015-03-01

    While feature sizes in the lithography process continue to shrink, defect sizes on blank wafers are becoming comparable to device sizes. Defects with nm-scale characteristic sizes can be misclassified by automated optical inspection (AOI) and require post-processing for proper classification. The atomic force microscope (AFM) is known to provide high lateral resolution and, among all techniques, the highest vertical resolution by mechanical probing. However, its low throughput and limited tip life, in addition to the laborious effort of finding the defects, have been the major limitations of this technique. In this paper we introduce automatic defect review (ADR) AFM as a post-inspection metrology tool for defect study and classification on 300 mm blank wafers that overcomes the limitations stated above. The ADR AFM provides a high throughput, high resolution, and non-destructive means of obtaining 3D information for nm-scale defect review and classification.

  7. Partial homogeneity based high-resolution nuclear magnetic resonance spectra under inhomogeneous magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Zhiliang; Lin, Liangjie; Lin, Yanqin, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn

    2014-09-29

    In the nuclear magnetic resonance (NMR) technique, it is of great necessity and importance to obtain high-resolution spectra, especially under inhomogeneous magnetic fields. In this study, a method based on partial homogeneity is proposed for retrieving high-resolution one-dimensional NMR spectra under inhomogeneous fields. Signals from a series of small voxels, which provide high resolution due to their small sizes, are recorded simultaneously. Then, an inhomogeneity correction algorithm based on pattern recognition is developed to automatically correct the influence of field inhomogeneity, thus yielding high-resolution information. Experiments on chemical solutions and fish spawn were carried out to demonstrate the performance of the proposed method. The proposed method serves as a single-radiofrequency-pulse high-resolution NMR spectroscopy under inhomogeneous fields and may provide an alternative for obtaining high-resolution spectra of in vivo living systems or chemical-reaction systems, where the performance of conventional techniques is usually degraded by field inhomogeneity.

  8. Automatic extraction of tree crowns from aerial imagery in urban environment

    NASA Astrophysics Data System (ADS)

    Liu, Jiahang; Li, Deren; Qin, Xunwen; Yang, Jianfeng

    2006-10-01

    Traditionally, field-based investigation has been the main method of surveying greenbelt in the urban environment, which is costly and has a low update frequency. In high resolution images, the structure and texture of tree canopies are statistically very similar despite great differences in canopy configuration, and the surface structure and texture of tree crowns are very different from those of other object types. In this paper, we present an automatic method to detect tree crowns in high resolution images of urban environments without any a priori knowledge. Our method captures the characteristic structure and texture of the tree crown surface, uses the variance and mathematical expectation of a defined image window to coarsely position candidate canopy blocks, and then analyzes their inner structure and texture to refine these candidate blocks. The possible ranges of all feature parameters used in our method are generated automatically from a small number of samples, and holes (HOLE) and their distribution are introduced as an important characteristic in the refinement process. The isotropy of the candidate image blocks and of the hole distribution is also integrated into the method. After introducing the theory of our method, aerial imagery with a resolution of about 0.3 m was used to test it, and the results indicate that it is an effective approach for automatically detecting tree crowns in urban environments.

  9. Advanced Ecosystem Mapping Techniques for Large Arctic Study Domains Using Calibrated High-Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Frost, G. V., Jr.

    2015-12-01

    Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet, the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery into general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally, and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products that can provide essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.
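    A minimal sketch of one family of texture metrics mentioned above: grey-level co-occurrence (GLCM) statistics computed per image tile with scikit-image (version 0.19 or later for these function names); the tile size, offsets, and chosen properties are illustrative, not the study's configuration.

    ```python
    # Sketch only: GLCM contrast and homogeneity for an 8-bit image tile.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_metrics(tile):
        glcm = graycomatrix(tile, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return (graycoprops(glcm, "contrast").mean(),
                graycoprops(glcm, "homogeneity").mean())
    ```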

  10. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. In order to achieve precise road extraction, the method comprises three stages: classification of the images based on the maximum likelihood algorithm to categorize them into the classes of interest; a modification process on the classified images using connected-component and morphological operators to extract the pixels of the desired objects by removing undesirable pixels from each class; and finally line extraction based on the RANSAC algorithm. In order to evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as reference. Evaluation of the proposed method on representative test images shows completeness values ranging between 77% and 93%.
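    A minimal sketch of the final line-extraction stage described here, fitting a dominant line to candidate road pixels with RANSAC via scikit-image; the binary road mask is assumed to come from the earlier classification and morphological steps, and the thresholds are illustrative.

    ```python
    # Sketch only: RANSAC line fit to the coordinates of candidate road pixels.
    import numpy as np
    from skimage.measure import LineModelND, ransac

    def fit_centerline(road_mask):
        coords = np.column_stack(np.nonzero(road_mask)).astype(float)  # (row, col)
        model, inliers = ransac(coords, LineModelND, min_samples=2,
                                residual_threshold=2.0, max_trials=1000)
        origin, direction = model.params   # a point on the line and its unit direction
        return origin, direction, inliers
    ```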

  11. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance the resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist the resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
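    A minimal sketch of two of the operations named above, temporal frame averaging against zero-mean noise and unsharp masking for low-contrast detail; these are generic OpenCV formulations, not the workbench's real-time implementation.

    ```python
    # Sketch only: frame averaging and unsharp masking for 8-bit grayscale frames.
    import cv2
    import numpy as np

    def average_frames(frames):
        """Mean of same-sized frames; zero-mean noise averages toward zero."""
        return np.mean(np.stack(frames).astype(np.float32), axis=0).astype(np.uint8)

    def unsharp_mask(img, sigma=3.0, amount=1.5):
        """Add back the difference from a Gaussian-blurred copy to sharpen."""
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        return cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)
    ```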

  12. A highly versatile automatized setup for quantitative measurements of PHIP enhancements

    NASA Astrophysics Data System (ADS)

    Kiryutin, Alexey S.; Sauer, Grit; Hadjiali, Sara; Yurkovskaya, Alexandra V.; Breitzke, Hergen; Buntkowsky, Gerd

    2017-12-01

    The design and application of a versatile and inexpensive experimental extension to NMR spectrometers is described that makes it possible to carry out highly reproducible PHIP experiments directly in the NMR sample tube, i.e. under the PASADENA condition, followed by detection of the NMR spectra of hyperpolarized products with high spectral resolution. With this high resolution it is feasible to study kinetic processes in solution with high accuracy. As a practical example, the dissolution of hydrogen gas in the liquid and the PHIP kinetics during the hydrogenation reaction of Fmoc-O-propargyl-L-tyrosine in acetone-d6 are monitored. The timing of the setup is fully controlled by the pulse programmer of the NMR spectrometer. By flushing with an inert gas it is possible to efficiently quench the hydrogenation reaction in a controlled fashion and to detect the relaxation of hyperpolarization without a background reaction. The proposed design makes it possible to carry out PHIP experiments in an automatic mode and to reliably determine the enhancement of polarized signals.

  13. Precision Targeting With a Tracking Adaptive Optics Scanning Laser Ophthalmoscope

    DTIC Science & Technology

    2006-01-01

    automatic high-resolution mosaic generation, and automatic blink detection and tracking re-lock were also tested. The system has the potential to become an...structures can lead to earlier detection of retinal diseases such as age-related macular degeneration (AMD) and diabetic retinopathy (DR). Combined...optics systems sense perturbations in the detected wave-front and apply corrections to an optical element that flatten the wave-front and allow near

  14. Georectification and snow classification of webcam images: potential for complementing satellite-derived snow maps over Switzerland

    NASA Astrophysics Data System (ADS)

    Dizerens, Céline; Hüsler, Fabia; Wunderle, Stefan

    2016-04-01

    The spatial and temporal variability of snow cover has a significant impact on climate and environment and is of great socio-economic importance for the European Alps. Satellite remote sensing data is widely used to study snow cover variability and can provide spatially comprehensive information on snow cover extent. However, cloud cover strongly impedes the surface view and hence limits the number of useful snow observations. Outdoor webcam images not only offer unique potential for complementing satellite-derived snow retrieval under cloudy conditions but could also serve as a reference for improved validation of satellite-based approaches. Thousands of webcams are currently connected to the Internet and deliver freely available images with high temporal and spatial resolutions. To exploit the untapped potential of these webcams, a semi-automatic procedure was developed to generate snow cover maps based on webcam images. We used daily webcam images of the Swiss alpine region to apply, improve, and extend existing approaches dealing with the positioning of photographs within a terrain model, appropriate georectification, and the automatic snow classification of such photographs. In this presentation, we provide an overview of the implemented procedure and demonstrate how our registration approach automatically resolves the orientation of a webcam by using a high-resolution digital elevation model and the webcam's position. This allows snow-classified pixels of webcam images to be related to their real-world coordinates. We present several examples of resulting snow cover maps, which have the same resolution as the digital elevation model and indicate whether each grid cell is snow-covered, snow-free, or not visible from webcams' positions. The procedure is expected to work under almost any weather condition and demonstrates the feasibility of using webcams for the retrieval of high-resolution snow cover information.

  15. Automatic MRF-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data that is able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Field (MRF) model, and employs efficient linear programming to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first converging in minutes and the second in seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the experiments performed.

  16. Automatic rocks detection and classification on high resolution images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Aboudan, A.; Pacifici, A.; Murana, A.; Cannarsa, F.; Ori, G. G.; Dell'Arciprete, I.; Allemand, P.; Grandjean, P.; Portigliotti, S.; Marcer, A.; Lorenzoni, L.

    2013-12-01

    High-resolution images can be used to obtain the location and size of rocks on planetary surfaces. In particular, the rock size-frequency distribution is a key parameter for evaluating surface roughness, investigating the geologic processes that formed the surface, and assessing the hazards related to spacecraft landing. The manual search for rocks in high-resolution images (even over small areas) can be very labor-intensive. An automatic or semi-automatic algorithm to identify rocks is therefore essential to enable further processing, such as determining rock presence, size, height (by means of shadows) and spatial distribution over an area of interest. Accurate localization of rock and shadow contours is the key step in rock detection. An approach to contour detection based on morphological operators and statistical thresholding is presented in this work. The identified contours are then fitted using a suitable geometric model of the rocks or shadows and used to estimate salient rock parameters (position, size, area, height). The performance of this approach has been evaluated both on images of a Martian analogue area in the Moroccan desert and on HiRISE images. Results have been compared with ground truth obtained by manual rock mapping and prove the effectiveness of the algorithm. The rock abundance and rock size-frequency distributions derived from selected HiRISE images have been compared with the results of similar analyses performed for the landing site certification of Mars landers (Viking, Pathfinder, MER, MSL) and with the available thermal data from IRTM and TES.
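    A minimal sketch of the contour-detection idea described here, combining statistical (Otsu) thresholding with morphological cleaning; the real pipeline also models shadows and fits geometric rock models, which is omitted, and the area cut-off is illustrative.

    ```python
    # Sketch only: threshold, clean with morphology, and extract candidate rock
    # contours from a grayscale orbital image patch (8-bit input assumed).
    import cv2

    def rock_contours(gray, min_area=10):
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # drop speckle
        contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [c for c in contours if cv2.contourArea(c) > min_area]
    ```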

  17. 2dx_automator: implementation of a semiautomatic high-throughput high-resolution cryo-electron crystallography pipeline.

    PubMed

    Scherer, Sebastian; Kowal, Julia; Chami, Mohamed; Dandey, Venkata; Arheit, Marcel; Ringler, Philippe; Stahlberg, Henning

    2014-05-01

    The introduction of direct electron detectors (DED) to cryo-electron microscopy has tremendously increased the signal-to-noise ratio (SNR) and quality of the recorded images. We discuss the optimal use of DEDs for cryo-electron crystallography, introduce a new automatic image processing pipeline, and demonstrate the vast improvement in the resolution achieved by the use of both together, especially for highly tilted samples. The new processing pipeline (now included in the software package 2dx) exploits the high SNR and frame readout frequency of DEDs to automatically correct for beam-induced sample movement, and reliably processes individual crystal images without human interaction as data are being acquired. A new graphical user interface (GUI) condenses all information required for quality assessment in one window, allowing the imaging conditions to be verified and adjusted during the data collection session. With this new pipeline an automatically generated unit cell projection map of each recorded 2D crystal is available less than 5 min after the image was recorded. The entire processing procedure yielded a three-dimensional reconstruction of the 2D-crystallized ion-channel membrane protein MloK1 with a much-improved resolution of 5Å in-plane and 7Å in the z-direction, within 2 days of data acquisition and simultaneous processing. The results obtained are superior to those delivered by conventional photographic film-based methodology of the same sample, and demonstrate the importance of drift-correction. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  18. A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers

    NASA Astrophysics Data System (ADS)

    Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang

    1990-02-01

    In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed. It includes a new type of three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a dedicated single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.

  19. Use of an automatic earth resistivity system for detection of abandoned mine workings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, W.R.; Burdick, R.

    1982-04-01

    Under the sponsorship of the US Bureau of Mines, a surface-operated automatic high resolution earth resistivity system and associated computer data processing techniques have been designed and constructed for use as a potential means of detecting abandoned coal mine workings. The hardware and software aspects of the new system are described together with applications of the method to the survey and mapping of abandoned mine workings.

  20. MR-based source localization for MR-guided HDR brachytherapy

    NASA Astrophysics Data System (ADS)

    Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.

    2018-04-01

    For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
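
    The phase-correlation and sub-pixel steps lend themselves to a compact illustration. The sketch below assumes plain 2-D NumPy arrays for the acquired slice and the simulated artifact template (it is not the authors' implementation): it normalizes the cross-power spectrum and refines the integer correlation peak with an intensity-weighted centroid.

    # Minimal phase-correlation localization sketch with sub-pixel refinement.
    import numpy as np

    def phase_correlate(image, template):
        F_img = np.fft.fft2(image)
        F_tpl = np.fft.fft2(template, s=image.shape)
        cross_power = F_img * np.conj(F_tpl)
        cross_power /= np.abs(cross_power) + 1e-12   # keep only phase information
        return np.real(np.fft.ifft2(cross_power))

    def subpixel_peak(corr, window=1):
        """Refine the integer peak location with an intensity-weighted centroid."""
        py, px = np.unravel_index(np.argmax(corr), corr.shape)
        ys = np.arange(py - window, py + window + 1)
        xs = np.arange(px - window, px + window + 1)
        patch = corr[np.ix_(ys % corr.shape[0], xs % corr.shape[1])]
        w = patch - patch.min()                      # non-negative weights
        dy = (w.sum(axis=1) @ (ys - py)) / w.sum()
        dx = (w.sum(axis=0) @ (xs - px)) / w.sum()
        return py + dy, px + dx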

  1. High resolution upgrade of the ATF damping ring BPM system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terunuma, N.; Urakawa, J.; /KEK, Tsukuba

    2008-05-01

    A beam position monitor (BPM) upgrade at the KEK Accelerator Test Facility (ATF) damping ring has been accomplished in its first stage, carried out by a KEK/FNAL/SLAC collaboration under the umbrella of the global ILC R&D effort. The upgrade consists of a high resolution, high reproducibility read-out system based on analog and digital downconversion techniques and digital signal processing, and also tests a new automatic gain error correction scheme. The technical concept and realization, as well as preliminary results of beam studies, are presented.

  2. Dorsomedial striatum involvement in regulating conflict between current and presumed outcomes.

    PubMed

    Mestres-Missé, Anna; Bazin, Pierre-Louis; Trampel, Robert; Turner, Robert; Kotz, Sonja A

    2014-09-01

    The balance between automatic and controlled processing is essential to human flexible but optimal behavior. On the one hand, the automation of habitual behavior and processing is indispensable, and, on the other hand, strategic processing is needed in light of unexpected, conflicting, or new situations. Using ultra-high-field high-resolution functional magnetic resonance imaging (7T-fMRI), the present study examined the role of subcortical structures in mediating this balance. Participants were asked to judge the congruency of sentences containing a semantically ambiguous or unambiguous word. Ambiguous sentences had three possible resolutions: dominant meaning, subordinate meaning, and incongruent. The dominant interpretation represents the most habitual response, whereas both the subordinate and incongruent options clash with this automatic response, and, hence, require cognitive control. Moreover, the subordinate resolution entails a less expected but correct outcome, while the incongruent condition is simply wrong. The current results reveal the involvement of the anterior dorsomedial striatum in modulating and resolving conflict between actual and expected outcomes, and highlight the importance of cortical and subcortical cooperation in this process. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Multi-stage robust scheme for citrus identification from high resolution airborne images

    NASA Astrophysics Data System (ADS)

    Amorós-López, Julia; Izquierdo Verdiguier, Emma; Gómez-Chova, Luis; Muñoz-Marí, Jordi; Zoilo Rodríguez-Barreiro, Jorge; Camps-Valls, Gustavo; Calpe-Maravilla, Javier

    2008-10-01

    Identification of land cover types is one of the most critical activities in remote sensing. Nowadays, managing land resources by using remote sensing techniques is becoming a common procedure to speed up the process while reducing costs. However, data analysis procedures should satisfy the accuracy figures demanded by institutions and governments for further administrative actions. This paper presents a methodological scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana autonomous region (Spain). The proposed approach introduces a multi-stage automatic scheme to reduce visual photointerpretation and ground validation tasks. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution (VHR) images (0.5 m) acquired in the visible and near infrared. Next, several automatic classifiers (decision trees, multilayer perceptrons, and support vector machines) are trained and combined to improve the final accuracy of the results. The proposed strategy fulfills the high accuracy demanded by policy makers by combining automatic classification methods with the available visual photointerpretation resources. A level of confidence based on the agreement between classifiers allows effective management by fixing the number of parcels to be reviewed. The proposed methodology can be applied to similar problems and applications.
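
    A minimal sketch of the classifier-combination idea follows, assuming integer class labels per cadastral parcel and scikit-learn estimators; the specific models, hyperparameters and the unanimity rule are illustrative stand-ins rather than the paper's configuration.

    # Hedged sketch: combine three classifiers and flag parcels for visual review.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    def classify_parcels(X_train, y_train, X_parcels):
        """y_train: non-negative integer class labels per parcel."""
        models = [
            DecisionTreeClassifier(max_depth=8),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
            SVC(kernel="rbf", C=10.0, gamma="scale"),
        ]
        preds = np.array([m.fit(X_train, y_train).predict(X_parcels) for m in models])
        combined = np.array([np.bincount(col).argmax() for col in preds.T])  # majority vote
        agreement = (preds == combined).mean(axis=0)   # fraction of classifiers agreeing
        needs_review = agreement < 1.0                 # non-unanimous parcels go to an operator
        return combined, agreement, needs_review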

  4. Automated imaging of cellular spheroids with selective plane illumination microscopy on a chip (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Paiè, Petra; Bassi, Andrea; Bragheri, Francesca; Osellame, Roberto

    2017-02-01

    Selective plane illumination microscopy (SPIM) is an optical sectioning technique that allows imaging of biological samples at high spatio-temporal resolution. Standard SPIM devices require dedicated set-ups, complex sample preparation and accurate system alignment, thus limiting the automation of the technique, its accessibility and throughput. We present a millimeter-scaled optofluidic device that incorporates selective plane illumination and fully automatic sample delivery and scanning. To this end an integrated cylindrical lens and a three-dimensional fluidic network were fabricated by femtosecond laser micromachining into a single glass chip. This device can upgrade any standard fluorescence microscope to a SPIM system. We used SPIM on a CHIP to automatically scan biological samples under a conventional microscope, without the need of any motorized stage: tissue spheroids expressing fluorescent proteins were flowed in the microchannel at constant speed and their sections were acquired while passing through the light sheet. We demonstrate high-throughput imaging of the entire sample volume (with a rate of 30 samples/min), segmentation and quantification in thick (100-300 μm diameter) cellular spheroids. This optofluidic device gives access to SPIM analyses to non-expert end-users, opening the way to automatic and fast screening of a high number of samples at subcellular resolution.

  5. Using support vector machines to improve elemental ion identification in macromolecular crystal structures

    DOE PAGES

    Morshed, Nader; Echols, Nathaniel; Adams, Paul D.

    2015-04-25

    In the process of macromolecular model building, crystallographers must examine electron density for isolated atoms and differentiate sites containing structured solvent molecules from those containing elemental ions. This task requires specific knowledge of metal-binding chemistry and scattering properties and is prone to error. A method has previously been described to identify ions based on manually chosen criteria for a number of elements. Here, the use of support vector machines (SVMs) to automatically classify isolated atoms as either solvent or one of various ions is described. Two data sets of protein crystal structures, one containing manually curated structures deposited with anomalous diffraction data and another with automatically filtered, high-resolution structures, were constructed. On the manually curated data set, an SVM classifier was able to distinguish calcium from manganese, zinc, iron and nickel, as well as all five of these ions from water molecules, with a high degree of accuracy. Additionally, SVMs trained on the automatically curated set of high-resolution structures were able to successfully classify most common elemental ions in an independent validation test set. This method is readily extensible to other elemental ions and can also be used in conjunction with previous methods based on a priori expectations of the chemical environment and X-ray scattering.
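
    As a rough illustration of the SVM step (not the published implementation), the sketch below trains a scikit-learn classifier on per-site feature vectors and flags low-confidence sites for manual review; the feature set, labels and probability threshold are assumptions.

    # Hedged sketch: solvent-versus-ion classification of isolated density peaks.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_ion_classifier(X, y):
        """X: per-site features (e.g. peak density, anomalous signal, coordination
        distances); y: labels such as 'HOH', 'CA', 'ZN', 'MN', 'FE', 'NI'."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
        clf.fit(X, y)
        return clf

    def classify_sites(clf, X_new, min_prob=0.7):
        proba = clf.predict_proba(X_new)
        labels = clf.classes_[proba.argmax(axis=1)]
        confident = proba.max(axis=1) >= min_prob      # low-confidence sites get reviewed
        return labels, confident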

  6. The Influence of Endmember Selection Method in Extracting Impervious Surface from Airborne Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Wang, J.; Feng, B.

    2016-12-01

    Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are recognized as one of the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection. The high degree of spatial and spectral heterogeneity of the urban environment makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. This study tested one manual and two semi-automatic EM selection strategies. The manual and the first semi-automatic methods have been widely used in EM selection. The second semi-automatic EM selection method is rather new and had previously been proposed only for moderate spatial resolution satellite imagery. The manual method visually selected the EM candidates from eight landcover types in the original image. The first semi-automatic method chose the EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangle shape of the HI scatter plot in the n-dimensional visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the triangle vertices. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries that were used to classify the test image with the spectral angle mapper. The overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. The V-I-S EM selection method performs best in this study, which proves its value not only for moderate spatial resolution satellite images but also for the increasingly accessible high spatial resolution airborne images. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images and provide ISA maps for hydrology analysis.
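
    Since all three endmember libraries are ultimately applied through the spectral angle mapper, a minimal SAM classifier is sketched below to make the matching rule concrete; pixel and endmember spectra are assumed to be plain NumPy arrays and the rejection angle is arbitrary.

    # Minimal spectral angle mapper (SAM) sketch.
    import numpy as np

    def spectral_angle(pixels, endmembers):
        """pixels: (N, B) spectra; endmembers: (E, B) library. Returns (N, E) angles in radians."""
        p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
        e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
        return np.arccos(np.clip(p @ e.T, -1.0, 1.0))

    def sam_classify(pixels, endmembers, max_angle=0.1):
        angles = spectral_angle(pixels, endmembers)
        labels = angles.argmin(axis=1)                 # closest endmember per pixel
        labels[angles.min(axis=1) > max_angle] = -1    # reject pixels far from every endmember
        return labels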

  7. The laboratory demonstration and signal processing of the inverse synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Gao, Si; Zhang, ZengHui; Xu, XianWen; Yu, WenXian

    2017-10-01

    This paper presents a coherent inverse synthetic-aperture imaging ladar (ISAL) system to obtain high resolution images. A balanced coherent optical system was built in the laboratory with a binary phase-coded modulation transmit waveform, which differs from the conventional chirp. A complete digital signal processing solution is proposed, including both the quality phase gradient autofocus (QPGA) algorithm and the cubic phase function (CPF) algorithm. Some high-resolution, well-focused ISAL images of retro-reflecting targets are shown to validate the concepts. It is shown that high resolution images can be achieved and that the influence of vibrations of the platform, targets and radar can be automatically compensated by the distinctive laboratory system and digital signal processing.

  8. Ship Detection Using High Resolution Satellite Imagery and Space-Based AIS

    NASA Astrophysics Data System (ADS)

    Hannevik, Tonje Nanette; Skauen, Andreas N.; Olsen, R. B.

    2013-03-01

    This paper presents a trial carried out in the Malangen area close to Tromsø city in the north of Norway in September 2010. High resolution Synthetic Aperture Radar (SAR) images from RADARSAT-2 were used to analyse how SAR images and cooperative reporting can be combined. Data from the Automatic Identification System, both land-based and space-based, have been used to identify detected vessels in the SAR images. The paper presents results of ship detection in high resolution RADARSAT-2 Standard Quad-Pol images, and how these results together with land-based and space-based AIS can be used. Some examples of tracking of vessels are also shown.

  9. Investigating the Potential of Deep Neural Networks for Large-Scale Classification of Very High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.

    2017-05-01

    Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most of the recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore well tailored for urban areas over restricted extents, but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.

  10. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to a better result in automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the other compared methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  11. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to a better result in automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the other compared methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  12. Automatic Detection and Positioning of Ground Control Points Using TerraSAR-X Multiaspect Acquisitions

    NASA Astrophysics Data System (ADS)

    Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiao xiang

    2018-05-01

    Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows for Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high-precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for the automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete workflow for the automatic generation of a large number of GCPs using SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high resolution spotlight images over the city of Oulu, Finland, and a test site in Berlin, Germany.

  13. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    PubMed Central

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method. PMID:24000283

  14. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration.

    PubMed

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.

  15. Use of an automatic resistivity system for detecting abandoned mine workings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, W.R.; Burdick, R.G.

    1983-01-01

    A high-resolution earth resistivity system has been designed and constructed for use as a means of detecting abandoned coal mine workings. The automatic pole-dipole earth resistivity technique has already been applied to the detection of subsurface voids for military applications. The hardware and software of the system are described, together with applications for surveying and mapping abandoned coal mine workings. Field tests are presented to illustrate the detection of both air-filled and water-filled mine workings.

  16. Automated Detection of Salt Marsh Platforms : a Topographic Method

    NASA Astrophysics Data System (ADS)

    Goodwin, G.; Mudd, S. M.; Clubb, F. J.

    2017-12-01

    Monitoring the topographic evolution of coastal marshes is a crucial step toward improving the management of these valuable landscapes under the pressure of relative sea level rise and anthropogenic modification. However, determining their geometrically complex boundaries currently relies on spectral vegetation detection methods or requires labour-intensive field surveys and digitisation. We propose a novel method to reproducibly isolate saltmarsh scarps and platforms from a DEM. Field observations and numerical models show that saltmarshes mature into sub-horizontal platforms delineated by sub-vertical scarps: based on this premise, we identify scarps as lines of local maxima on a slope*relief raster, then fill landmasses from the scarps upward, thus isolating mature marsh platforms. Non-dimensional search parameters allow batch-processing of data without recalibration. We test our method using lidar-derived DEMs of six saltmarshes in England with varying tidal ranges and geometries, for which topographic platforms were manually isolated from tidal flats. Agreement between manual and automatic segregation exceeds 90% for resolutions of 1 m, with all but one site maintaining this performance for resolutions up to 3.5 m. For resolutions of 1 m, automatically detected platforms are comparable in surface area and elevation distribution to digitised platforms. We also find that our method allows the accurate detection of local block failures 3 times larger than the DEM resolution. Detailed inspection reveals that although tidal creeks were digitised as part of the marsh platform, automatic detection classifies them as part of the tidal flat, causing an increase in false negatives and overall platform perimeter. This suggests our method would benefit from combination with existing creek detection algorithms. Fallen blocks and pioneer zones are inconsistently identified, particularly in macro-tidal marshes, leading to differences between digitisation and the automated method: this also suggests that these areas must be carefully considered when analysing erosion and accretion processes. Ultimately, we have shown that automatic detection of marsh platforms from high-resolution topography is possible and sufficient to monitor and analyse topographic evolution.
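
    A toy version of the scarp-and-platform logic is sketched below under stated simplifications: scarps are taken as strong local maxima of the slope*relief raster, and the upward filling from the scarps is replaced by a crude elevation threshold, so this illustrates the idea rather than reproducing the published algorithm.

    # Hedged sketch: scarp detection and a simplified platform mask from a DEM.
    import numpy as np
    from scipy import ndimage

    def detect_scarps(slope, relief, neighborhood=5, k=2.0):
        metric = slope * relief
        is_local_max = metric == ndimage.maximum_filter(metric, size=neighborhood)
        is_strong = metric > metric.mean() + k * metric.std()
        return is_local_max & is_strong                # boolean scarp mask

    def platform_mask(dem, scarps):
        """Very crude stand-in for filling the landmass upward from the scarps."""
        threshold = np.median(dem[scarps])
        return dem >= threshold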

  17. Automatic Traffic Advisory and Resolution Service (ATARS) Multi-Site Algorithms. Revision 1,

    DTIC Science & Technology

    1980-10-01

    Summary Concept Description: The Automatic Traffic Advisory and Resolution Service is a ground-based collision avoidance system to be implemented in the ... capability. A ground-based computer processes the data and continuously provides proximity warning information and, when necessary, resolution advisories to ... of ground-based air traffic control, which provides proximity warning and separation services to uncontrolled aircraft in a given region of airspace.

  18. Classification of cloud fields based on textural characteristics

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Chen, D. W.

    1987-01-01

    The present study reexamines the applicability of texture-based features for automatic cloud classification using very high spatial resolution (57 m) Landsat multispectral scanner digital data. It is concluded that cloud classification can be accomplished using only a single visible channel.

  19. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitalization of Cultural Heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from a complete point cloud and high-resolution images requires the matching of images and point clouds, the acquisition of homonymous feature points, data registration, etc. However, the one-to-one correspondence between an image and its corresponding point cloud depends on inefficient manual search. The effective classification and management of a large number of images, and the matching of large images with their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on the APP synergy of a mobile phone. Firstly, we develop an APP based on Android to take pictures and record related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image and its corresponding laser point cloud is realized. Finally, the mapping relationship between the global image, the local images and the intensity image is established according to homonymous feature points, so that visualization, management and query of the images can be carried out.

  20. Using support vector machines to improve elemental ion identification in macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morshed, Nader; Lawrence Berkeley National Laboratory, Berkeley, CA 94720; Echols, Nathaniel, E-mail: nechols@lbl.gov

    2015-05-01

    A method to automatically identify possible elemental ions in X-ray crystal structures has been extended to use support vector machine (SVM) classifiers trained on selected structures in the PDB, with significantly improved sensitivity over manually encoded heuristics. In the process of macromolecular model building, crystallographers must examine electron density for isolated atoms and differentiate sites containing structured solvent molecules from those containing elemental ions. This task requires specific knowledge of metal-binding chemistry and scattering properties and is prone to error. A method has previously been described to identify ions based on manually chosen criteria for a number of elements. Here, the use of support vector machines (SVMs) to automatically classify isolated atoms as either solvent or one of various ions is described. Two data sets of protein crystal structures, one containing manually curated structures deposited with anomalous diffraction data and another with automatically filtered, high-resolution structures, were constructed. On the manually curated data set, an SVM classifier was able to distinguish calcium from manganese, zinc, iron and nickel, as well as all five of these ions from water molecules, with a high degree of accuracy. Additionally, SVMs trained on the automatically curated set of high-resolution structures were able to successfully classify most common elemental ions in an independent validation test set. This method is readily extensible to other elemental ions and can also be used in conjunction with previous methods based on a priori expectations of the chemical environment and X-ray scattering.

  1. Adaptive hyperspectral imager: design, modeling, and control

    NASA Astrophysics Data System (ADS)

    McGregor, Scot; Lacroix, Simon; Monmayrant, Antoine

    2015-08-01

    An adaptive hyperspectral imager is presented. We propose a system with easily adaptable spectral resolution, adjustable acquisition time, and high spatial resolution that is independent of spectral resolution. The system yields the possibility to define a variety of acquisition schemes, and in particular near-snapshot acquisitions that may be used to measure the spectral content of given or automatically detected regions of interest. The proposed system is modelled and simulated, and tests on a first prototype validate the approach to achieve near-snapshot spectral acquisitions without resorting to any computationally heavy post-processing or cumbersome calibration.

  2. Enhanced High Resolution RBS System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollock, Thomas J.; Hass, James A.; Klody, George M.

    2011-06-01

    Improvements in full spectrum resolution with the second NEC high resolution RBS system are summarized. Results for 50 Å TiN/HfO films on Si yielding energy resolution on the order of 1 keV are also presented. Detector enhancements include improved pulse processing electronics, upgraded shielding for the MCP/RAE detector, and reduced noise generated from pumping. Energy resolution measurements on the spectra front edge, coupled with calculations using a 0.4 mStr solid angle, show that the beam energy spread at 400 keV from the Pelletron® accelerator is less than 100 eV. To improve user throughput, magnet control has been added to the automatic data collection. Depth profiles derived from experimental data are discussed. For the thin films profiled, depth resolutions were on the Angstrom level, with the non-linear energy/channel conversions ranging from 100 to 200 eV.

  3. Towards native-state imaging in biological context in the electron microscope

    PubMed Central

    Weston, Anne E.; Armer, Hannah E. J.

    2009-01-01

    Modern cell biology is reliant on light and fluorescence microscopy for analysis of cells, tissues and protein localisation. However, these powerful techniques are ultimately limited in resolution by the wavelength of light. Electron microscopes offer much greater resolution due to the shorter effective wavelength of electrons, allowing direct imaging of sub-cellular architecture. The harsh environment of the electron microscope chamber and the properties of the electron beam have led to complex chemical and mechanical preparation techniques, which distance biological samples from their native state and complicate data interpretation. Here we describe recent advances in sample preparation and instrumentation, which push the boundaries of high-resolution imaging. Cryopreparation, cryoelectron microscopy and environmental scanning electron microscopy strive to image samples in near native state. Advances in correlative microscopy and markers enable high-resolution localisation of proteins. Innovation in microscope design has pushed the boundaries of resolution to atomic scale, whilst automatic acquisition of high-resolution electron microscopy data through large volumes is finally able to place ultrastructure in biological context. PMID:19916039

  4. An automatic modular procedure to generate high-resolution earthquake catalogues: application to the Alto Tiberina Near Fault Observatory (TABOO), Italy.

    NASA Astrophysics Data System (ADS)

    Di Stefano, R.; Chiaraluce, L.; Valoroso, L.; Waldhauser, F.; Latorre, D.; Piccinini, D.; Tinti, E.

    2014-12-01

    The Alto Tiberina Near Fault Observatory (TABOO) in the upper Tiber Valley (northern Apennines) is an INGV research infrastructure devoted to the study of the preparatory processes and deformation characteristics of the Alto Tiberina Fault (ATF), a 60 km long, low-angle normal fault active since the Quaternary. The TABOO seismic network, covering an area of 120 × 120 km, consists of 60 permanent surface and 250 m deep borehole stations equipped with 3-component 0.5 s to 120 s velocimeters and strong-motion sensors. Continuous seismic recordings are transmitted in real time to the INGV, where we have set up an automatic procedure that produces high-resolution earthquake catalogues (locations, magnitudes, first-motion polarities) in near real time. A sensitive event detection engine running on the continuous data stream is followed by advanced phase identification, arrival-time picking, and quality assessment algorithms (MPX). Pick weights are determined from a statistical analysis of a set of predictors designed to correctly apply an a priori chosen weighting scheme. The MPX results are used to routinely update earthquake catalogues based on a variety of (1D and 3D) velocity models and location techniques. We also apply the DD-RT procedure, which uses cross-correlation and double-difference methods in real time to relocate events with high precision relative to a high-resolution background catalog. P- and S-onset and location information are used to automatically compute focal mechanisms and VP/VS variations in space and time, and to periodically update 3D VP and VP/VS tomographic models. We present results from four years of operation, during which this monitoring system analyzed over 1.2 million detections and recovered ~60,000 earthquakes at a detection threshold of ML 0.5. The high-resolution information is being used to study changes in seismicity patterns and fault and rock properties along the ATF in space and time, and to elaborate ground shaking scenarios adopting diverse slip distributions and rupture directivity models.

  5. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  6. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking.

    PubMed

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J; Jian, Yifan; Sarunic, Marinko V

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  7. Pre-processing liquid chromatography/high-resolution mass spectrometry data: extracting pure mass spectra by deconvolution from the invariance of isotopic distribution.

    PubMed

    Krishnan, Shaji; Verheij, Elwin E R; Bas, Richard C; Hendriks, Margriet W B; Hankemeier, Thomas; Thissen, Uwe; Coulier, Leon

    2013-05-15

    Mass spectra obtained by deconvolution of liquid chromatography/high-resolution mass spectrometry (LC/HRMS) data can be impaired by non-informative mass-to-charge (m/z) channels. This impairment of mass spectra can have a significant negative influence on further post-processing, such as quantification and identification. A metric derived from the knowledge of errors in isotopic distribution patterns, and from the quality of the signal within a pre-defined mass chromatogram block, has been developed to pre-select all informative m/z channels. This procedure results in the clean-up of deconvoluted mass spectra by maintaining the intensity counts from m/z channels that originate from a specific compound/molecular ion (for example, the molecular ion, adducts, 13C isotopes and multiply charged ions) and removing all m/z channels that are not related to the specific peak. The methodology has been successfully demonstrated for two sets of high-resolution LC/MS data. The approach described is therefore thought to be a useful tool in the automatic processing of LC/HRMS data. It clearly shows advantages compared to other approaches like peak picking and de-isotoping, in the sense that all information is retained while non-informative data are removed automatically. Copyright © 2013 John Wiley & Sons, Ltd.
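
    As a much-simplified illustration of the isotope-invariance idea (the published metric also scores signal quality within a chromatogram block), the check below compares the observed 13C/monoisotopic intensity ratio of a channel pair against the ratio expected from an assumed carbon count; the abundance constant is standard, the tolerance is arbitrary.

    # Toy check: keep an m/z channel only if its 13C isotope ratio is plausible.
    def informative_channel(mono_intensity, c13_intensity, n_carbons, tol=0.3):
        """Return True when the observed 13C/12C ratio matches the theoretical one."""
        expected = n_carbons * 0.0107                  # ~1.07% natural 13C abundance per carbon
        observed = c13_intensity / max(mono_intensity, 1e-12)
        return abs(observed - expected) / expected < tol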

  8. Developments in the CCP4 molecular-graphics project.

    PubMed

    Potterton, Liz; McNicholas, Stuart; Krissinel, Eugene; Gruber, Jan; Cowtan, Kevin; Emsley, Paul; Murshudov, Garib N; Cohen, Serge; Perrakis, Anastassis; Noble, Martin

    2004-12-01

    Progress towards structure determination that is both high-throughput and high-value is dependent on the development of integrated and automatic tools for electron-density map interpretation and for the analysis of the resulting atomic models. Advances in map-interpretation algorithms are extending the resolution regime in which fully automatic tools can work reliably, but at present human intervention is required to interpret poor regions of macromolecular electron density, particularly where crystallographic data are only available to modest resolution [for example, I/sigma(I) < 2.0 for minimum resolution 2.5 A]. In such cases, a set of manual and semi-manual model-building molecular-graphics tools is needed. At the same time, converting the knowledge encapsulated in a molecular structure into understanding is dependent upon visualization tools, which must be able to communicate that understanding to others by means of both static and dynamic representations. CCP4mg is a program designed to meet these needs in a way that is closely integrated with the ongoing development of CCP4 as a program suite suitable for both low- and high-intervention computational structural biology. As well as providing a carefully designed user interface to advanced algorithms of model building and analysis, CCP4mg is intended to present a graphical toolkit to developers of novel algorithms in these fields.

  9. Fusion of multi-source remote sensing data for agriculture monitoring tasks

    NASA Astrophysics Data System (ADS)

    Skakun, S.; Franch, B.; Vermote, E.; Roger, J. C.; Becker Reshef, I.; Justice, C. O.; Masek, J. G.; Murphy, E.

    2016-12-01

    Remote sensing data are an essential source of information for enabling monitoring and quantification of crop state at global and regional scales. Crop mapping, state assessment, area estimation and yield forecasting are the main tasks being addressed within GEO-GLAM. The efficiency of agriculture monitoring can be improved when heterogeneous multi-source remote sensing datasets are integrated. Here, we present several case studies of utilizing MODIS, Landsat-8 and Sentinel-2 data along with meteorological data (growing degree days, GDD) for winter wheat yield forecasting, mapping and area estimation. Archived coarse spatial resolution data, such as MODIS, VIIRS and AVHRR, can provide daily global observations that, coupled with statistical data on crop yield, enable the development of empirical models for timely yield forecasting at the national level. With the availability of high-temporal and high spatial resolution Landsat-8 and Sentinel-2A imagery, coarse resolution empirical yield models can be downscaled to provide yield estimates at regional and field scale. In particular, we present the case study of downscaling the MODIS CMG based generalized winter wheat yield forecasting model to high spatial resolution data sets, namely the harmonized Landsat-8 - Sentinel-2A surface reflectance product (HLS). Since the yield model requires corresponding in-season crop masks, we propose an automatic approach to extract winter crop maps from MODIS NDVI and MERRA-2 derived GDD using a Gaussian mixture model (GMM). Validation for the state of Kansas (US) and Ukraine showed that the approach can yield accuracies > 90% without using reference (ground truth) data sets. Another application of yearly derived winter crop maps is their use for stratification purposes within area frame sampling for crop area estimation; in particular, one can simulate the dependence of the error (coefficient of variation) on the number of samples and strata size. This approach was used for estimating the area of winter crops in Ukraine for 2013-2016. The GMM-GDD approach is further extended to HLS data to provide automatic winter crop mapping at 30 m resolution for the crop yield model and area estimation. In cases of persistent cloudiness, the addition of Sentinel-1A synthetic aperture radar (SAR) images is explored for automatic winter crop mapping.
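
    The GMM step can be illustrated compactly. The sketch below assumes a single per-pixel feature (for example, peak NDVI reached before a GDD cutoff) rather than the paper's exact MODIS/MERRA-2 processing; it fits a two-component mixture and labels the component with the higher mean as winter crop.

    # Hedged sketch: unsupervised winter-crop masking with a Gaussian mixture model.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def winter_crop_mask(feature):
        """feature: 1-D array of per-pixel values (e.g. early-season peak NDVI)."""
        X = feature.reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        labels = gmm.predict(X)
        winter_component = np.argmax(gmm.means_.ravel())   # winter crops green up earlier/stronger
        return labels == winter_component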

  10. Systematic recover of long high-resolution rainfall time series recorded by pluviographs during the 20th century.

    NASA Astrophysics Data System (ADS)

    Delitala, Alessandro M. S.; Deidda, Roberto; Mascaro, Giuseppe; Piga, Enrico; Querzoli, Giorgio

    2010-05-01

    During most of the 20th century, precipitation has been continuously measured by means of so-called "pluviographs", i.e. rain gauges including a mechanical apparatus for continuously recording the depth of water from precipitation on specific strip charts, usually on a weekly basis. The signal recorded on such strips was visually examined by trained personnel on a regular basis in order to extract the daily precipitation totals and the maximum precipitation intensities over short periods (from a few minutes to hours). The rest of the high-resolution information contained in the signal was usually not extracted, except for specific cases. A systematic recovery of the entire high-temporal-resolution information contained in these precipitation signals would provide a fundamental database to improve the characterization of historical rainfall climatology during the previous century. The Department of Land Engineering of the University of Cagliari has recently developed and tested automatic software, based on image analysis techniques, which is able to acquire the scanned images of the pluviograph strip charts, automatically digitise the signal and produce a digital database of continuous precipitation records at the highest possible temporal resolution, i.e. 5 to 10 minutes. Along with that, a significant amount of daily precipitation totals from the late 19th and the 20th century, either elaborated from pluviograph strip charts or simply derived from bucket rain gauges, still exists in paper form but has never been digitised. Within a project partly funded by the Operational Programme of the European Union "Italia-Francia Marittimo", the Regional Environmental Protection Agency of Sardinia and the University of Cagliari will recover both the high-resolution rainfall signals and the older time series of daily totals recorded by a large number of pluviographs belonging to the historical monitoring networks of the island of Sardinia. Such data will then be used to construct the high-resolution climatology of precipitation over Sardinia, both assuming a stationary climate and a slowly varying climate. Specific attention will be devoted to a set of critical hydrological basins, often affected by intense precipitation and flash floods. All information will then be made available to researchers, regional officers, technicians (e.g. hydraulic engineers) and the general public interested in such information. The present poster describes the general scope of the E.U. project and the specific activities in the field of the climatology of Sardinian rainfall that will be conducted, as well as the expected results. A section will be dedicated to showing how the pluviograph strips are automatically digitised.

  11. Automatic Detection of Changes on Mars Surface from High-Resolution Orbital Images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter

    2017-04-01

    Over the last 40 years Mars has been extensively mapped by several NASA and ESA orbital missions, generating a large image dataset comprised of approximately 500,000 high-resolution images (of <100m resolution). The overall area mapped from orbital imagery is approximately 6 times the overall surface of Mars [1]. The multi-temporal coverage of Martian surface allows a visual inspection of the surface to identify dynamic phenomena, i.e. surface features that change over time, such as slope streaks [2], recurring slope lineae [3], new impact craters [4], etc. However, visual inspection for change detection is a limited approach, since it requires extensive use of human resources, which is very difficult to achieve when dealing with a rapidly increasing volume of data. Although citizen science can be employed for training and verification it is unsuitable for planetwide systematic change detection. In this work, we introduce a novel approach in planetary image change detection, which involves a batch-mode automatic change detection pipeline that identifies regions that have changed. This is tested in anger, on tens of thousands of high-resolution images over the MC11 quadrangle [5], acquired by CTX, HRSC, THEMIS-VIS and MOC-NA instruments [1]. We will present results which indicate a substantial level of activity in this region of Mars, including instances of dynamic natural phenomena that haven't been cataloged in the planetary science literature before. We will demonstrate the potential and usefulness of such an automatic approach in planetary science change detection. Acknowledgments: The research leading to these results has received funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement n° 607379. References: [1] P. Sidiropoulos and J. - P. Muller (2015) On the status of orbital high-resolution repeat imaging of Mars for the observation of dynamic surface processes. Planetary and Space Science, 117: 207-222. [2] O. Aharonson, et al. (2003) Slope streak formation and dust deposition rates on Mars. Journal of Geophysical Research: Planets, 108(E12):5138 [3] A. McEwen, et al. (2011) Seasonal flows on warm martian slopes. Science, 333 (6043): 740-743. [4] S. Byrne, et al. (2009) Distribution of mid-latitude ground ice on mars from new impact craters. Science, 325(5948):1674-1676. [5] K. Gwinner, et al (2016) The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites. Planetary and Space Science, 126: 93-138.

  12. Seamless presentation capture, indexing, and management

    NASA Astrophysics Data System (ADS)

    Hilbert, David M.; Cooper, Matthew; Denoue, Laurent; Adcock, John; Billsus, Daniel

    2005-10-01

    Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.

  13. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
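
    To make the wavelet-feature correlation concrete, the sketch below builds feature maps from the strongest coarse-level detail coefficients with PyWavelets and estimates a pure translation by FFT-based correlation; it is a simplified serial illustration (the rotation search and the parallel implementations are omitted), and the wavelet, level and maxima fraction are arbitrary choices.

    # Hedged sketch: registration by correlating wavelet-coefficient maxima.
    import numpy as np
    import pywt

    LEVEL = 3                                          # decomposition depth of the coarse grid

    def wavelet_feature_map(img, wavelet="db2", keep=0.05):
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=LEVEL)
        cH, cV, cD = coeffs[1]                         # coarsest-level detail bands
        mag = np.abs(cH) + np.abs(cV) + np.abs(cD)
        thr = np.quantile(mag, 1.0 - keep)             # keep only the strongest coefficients
        return np.where(mag >= thr, mag, 0.0)

    def estimate_shift(ref, tgt):
        """Estimate a (dy, dx) translation of tgt relative to ref (same-shape images)."""
        f_ref, f_tgt = wavelet_feature_map(ref), wavelet_feature_map(tgt)
        corr = np.real(np.fft.ifft2(np.fft.fft2(f_ref) * np.conj(np.fft.fft2(f_tgt))))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy   # wrap negative shifts
        dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
        return dy * 2 ** LEVEL, dx * 2 ** LEVEL        # scale back to full resolution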

  14. MetMSLine: an automated and fully integrated pipeline for rapid processing of high-resolution LC-MS metabolomic datasets.

    PubMed

    Edmands, William M B; Barupal, Dinesh K; Scalbert, Augustin

    2015-03-01

    MetMSLine represents a complete collection of functions in the R programming language as an accessible GUI for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets from acquisition through to final metabolite identification forming a backend to output from any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) Biomarker-MS/MS fragmentation spectra matching and fragment/neutral loss annotation (Auto.MS.MS.match.R) and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of both X-variables (simulated LC-MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. © The Author 2014. Published by Oxford University Press.

  15. MetMSLine: an automated and fully integrated pipeline for rapid processing of high-resolution LC–MS metabolomic datasets

    PubMed Central

    Edmands, William M. B.; Barupal, Dinesh K.; Scalbert, Augustin

    2015-01-01

    Summary: MetMSLine represents a complete collection of functions in the R programming language as an accessible GUI for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets from acquisition through to final metabolite identification forming a backend to output from any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) Biomarker—MS/MS fragmentation spectra matching and fragment/neutral loss annotation (Auto.MS.MS.match.R) and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). Availability and implementation: All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of both X-variables (simulated LC–MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. Contact: ScalbertA@iarc.fr PMID:25348215

  16. Automatic transfer function generation for volume rendering of high-resolution x-ray 3D digital mammography images

    NASA Astrophysics Data System (ADS)

    Alyassin, Abdal M.

    2002-05-01

    3D digital mammography (3DDM) is a new technology that provides high resolution X-ray breast tomographic data. As with any other tomographic medical imaging modality, viewing a stack of tomographic images can be time consuming, especially if the images have a large matrix size, and it can be difficult to mentally reconstruct 3D breast structures from the slices. Therefore, there is a need to readily visualize the data in 3D. However, one of the issues that hinders the use of volume rendering (VR) is finding an automatic way to generate transfer functions that efficiently map the important diagnostic information in the data. We have developed a method that randomly samples the volume. Based on the mean and the standard deviation of these samples, the technique determines the lower and upper limits of a piecewise linear ramp transfer function. We have volume rendered several 3DDM datasets using this technique and visually compared the outcome with the result of a conventional automatic technique. The transfer function generated by the proposed technique provided superior VR images compared with the conventional technique. Furthermore, the reproducibility of the transfer function improved with the number of samples taken from the volume, at the expense of processing time.
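    The sampling-based ramp described here is simple enough to sketch directly. The following is a minimal illustration of the idea: sample the volume randomly, then set the ramp limits from the sample mean and standard deviation. The k_low/k_high multipliers are illustrative assumptions; the paper does not give its exact limits.

    ```python
    # Random-sampling construction of a piecewise linear opacity ramp.
    import numpy as np

    def ramp_transfer_function(volume, n_samples=10000, k_low=0.5, k_high=3.0, seed=0):
        rng = np.random.default_rng(seed)
        samples = rng.choice(volume.ravel(), size=n_samples)       # random voxel samples
        mu, sigma = samples.mean(), samples.std()
        lo, hi = mu + k_low * sigma, mu + k_high * sigma           # ramp limits

        def opacity(intensity):
            """Piecewise linear ramp: 0 below lo, 1 above hi, linear in between."""
            return np.clip((np.asarray(intensity, dtype=float) - lo) / (hi - lo), 0.0, 1.0)

        return lo, hi, opacity
    ```

    Increasing n_samples stabilizes the estimated limits, which mirrors the reproducibility-versus-processing-time trade-off noted in the abstract.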

  17. Automatic and manual segmentation of healthy retinas using high-definition optical coherence tomography.

    PubMed

    Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe

    2011-03-01

    This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas of 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between the automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided results similar to those of the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. This study compared two methods of assessing retinal volume using HD-OCT scans in healthy retinas. Both methods provided realistic volumetric data when applied to raster scan sets. Manual segmentation represents an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.

  18. Automatic Focusing for a 675 GHz Imaging Radar with Target Standoff Distances from 14 to 34 Meters

    NASA Technical Reports Server (NTRS)

    Tang, Adrian; Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Siegel, Peter H.

    2013-01-01

    This paper discusses the issue of limited focal depth for high-resolution imaging radar operating over a wide range of standoff distances. We describe a technique for automatically focusing a THz imaging radar system using translational optics combined with range estimation based on a reduced chirp bandwidth setting. The demonstrated focusing algorithm estimates the correct focal depth for desired targets in the field of view at unknown standoffs and in the presence of clutter to provide good imagery at 14 to 30 meters of standoff.

  19. Potential and limitations of webcam images for snow cover monitoring in the Swiss Alps

    NASA Astrophysics Data System (ADS)

    Dizerens, Céline; Hüsler, Fabia; Wunderle, Stefan

    2017-04-01

    In Switzerland, several thousand outdoor webcams are currently connected to the Internet. They deliver freely available images that can be used to analyze snow cover variability at high spatio-temporal resolution. To make use of this big data source, we have implemented a webcam-based snow cover mapping procedure that derives snow cover maps from such webcam images almost automatically. As there is usually no information available about a webcam and its parameters, our registration approach automatically resolves these parameters (camera orientation, principal point, field of view) using an estimate of the webcam's position, the mountain silhouette, and a high-resolution digital elevation model (DEM). Combined with an automatic snow classification and an image alignment using SIFT features, our procedure can be applied to arbitrary images to generate snow cover maps with a minimum of effort. The resulting snow cover maps have the same resolution as the digital elevation model and indicate whether each grid cell is snow-covered, snow-free, or hidden from the webcam's position. Up to now, we have processed images from about 290 webcams in our archive and evaluated images from 20 webcams using manually selected ground control points (GCPs) to assess the mapping accuracy of our procedure. We present methodological limitations and ongoing improvements, show some applications of our snow cover maps, and demonstrate that webcams not only offer a great opportunity to complement satellite-derived snow retrieval under cloudy conditions, but also serve as a reference for improved validation of satellite-based approaches.

  20. Automated full-3D digitization system for documentation of paintings

    NASA Astrophysics Data System (ADS)

    Karaszewski, Maciej; Adamczyk, Marcin; Sitnik, Robert; Michoński, Jakub; Załuski, Wojciech; Bunsch, Eryk; Bolewicki, Paweł

    2013-05-01

    In this paper, a fully automated 3D digitization system for the documentation of paintings is presented. It consists of a specially designed frame system for securely fixing the painting and a custom-designed, structured-light-based, high-resolution measurement head with no IR or UV emission. This device is automatically positioned along two axes (parallel to the surface of the digitized painting), with additional manual positioning along the third, perpendicular axis. Manual change of the observation angle is also possible around two axes to re-measure even partially shadowed areas. The whole system is built in a way that provides full protection of the digitized object (moving elements cannot reach its vicinity) and is driven by computer-controlled, highly precise servomechanisms. It can be used for automatic (without any user attention) and fast measurement of paintings, with some limitations on their properties: the maximum size of the picture is 2000 mm x 2000 mm (with a deviation from flatness smaller than 20 mm). The measurement head is automatically calibrated by the system, and its working volume ranges from 50 mm x 50 mm x 20 mm (10,000 points per square mm) to 120 mm x 80 mm x 60 mm (2,500 points per square mm). The directional measurements obtained with this system are automatically pre-aligned using the measurement head's position coordinates known from the servomechanisms. After the whole painting is digitized, the measurements are fine-aligned with a color-based ICP algorithm to remove any influence of possible inaccuracy of the positioning devices. We present exemplary digitization results along with a discussion of the analysis opportunities offered by such high-resolution 3D computer models of paintings.

  1. Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map.

    PubMed

    Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen

    2015-09-11

    This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions where targets may be present are detected from the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map in favor of the candidate regions. Finally, the targets are detected by segmenting the saliency map using a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately, and decrease the false alarm rate.
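    Only the final step lends itself to a short sketch. The following is a hedged illustration of CFAR-type segmentation of a saliency map, comparing each pixel against background statistics estimated in a local ring around it; the guard/background window sizes and the factor k are illustrative assumptions, not the paper's settings.

    ```python
    # Cell-averaging CFAR-style thresholding of a 2D saliency map.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def cfar_segment(saliency, guard=2, background=8, k=3.0):
        """Flag pixels exceeding local background mean + k * background std."""
        s = np.asarray(saliency, dtype=float)
        big = 2 * (guard + background) + 1           # outer window edge length
        small = 2 * guard + 1                        # guard window edge length
        sum_big = uniform_filter(s, big) * big ** 2
        sum_small = uniform_filter(s, small) * small ** 2
        sq_big = uniform_filter(s ** 2, big) * big ** 2
        sq_small = uniform_filter(s ** 2, small) * small ** 2
        n = big ** 2 - small ** 2                    # number of background pixels
        mean_bg = (sum_big - sum_small) / n
        var_bg = (sq_big - sq_small) / n - mean_bg ** 2
        return s > mean_bg + k * np.sqrt(np.maximum(var_bg, 0.0))
    ```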

  2. INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL

    EPA Science Inventory

    The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...

  3. Defect inspection and printability study for 14 nm node and beyond photomask

    NASA Astrophysics Data System (ADS)

    Seki, Kazunori; Yonetani, Masashi; Badger, Karen; Dechene, Dan J.; Akima, Shinji

    2016-10-01

    Two different mask inspection techniques, high-resolution and litho-based inspection, are developed and compared for 14 nm node and beyond photomasks. High-resolution inspection is the general inspection method, in which a 19x nm wavelength laser is used with high-NA inspection optics. Litho-based inspection is a new inspection technology. This inspection uses wafer lithography information and, as such, has automatic defect classification capability based on wafer printability. Both high-resolution and litho-based inspection methods are compared using 14 nm and 7 nm node programmed-defect and production design masks. Defect sensitivity and mask inspectability are compared, in addition to defect classification and throughput. Additionally, the cost and infrastructure of each inspection method are analyzed and their impact is discussed.

  4. Automatic anterior chamber angle assessment for HD-OCT images.

    PubMed

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Wong, Hong-Tym; Aung, Tin

    2011-11-01

    Angle-closure glaucoma is a major blinding eye disease and can be detected by measuring the anterior chamber angle in the human eye. High-definition OCT (Cirrus HD-OCT) is an emerging noninvasive, high-speed, and high-resolution imaging modality for the anterior segment of the eye. Here, we propose a novel algorithm which automatically detects a new landmark, Schwalbe's line, and measures the anterior chamber angle in HD-OCT images. The distortion caused by refraction is corrected by dewarping the HD-OCT images, and three biometric measurements are defined to quantitatively assess the anterior chamber angle. The proposed algorithm was tested on 40 HD-OCT images of the eye and provided accurate measurements in about 1 second.

  5. A cloud mask methodology for high resolution remote sensing data combining information from high and medium resolution optical sensors

    NASA Astrophysics Data System (ADS)

    Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús

    2011-09-01

    This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. Unlike traditional methods, the approach does not rely on thermal bands, and it is applicable to images from most high resolution earth observation remote sensing sensors. The methodology couples pixel-based seed identification and object-based region growing. The seed identification stage relies on pixel value comparison between the high resolution images and cloud free composites at lower spatial resolution acquired at almost the same dates. The methodology was tested using SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad-hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results, identifying and mapping clouds of different types and sizes over various land surfaces including natural vegetation, agricultural land, built-up areas, water bodies and snow.
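    A simplified sketch of the two-stage idea follows: pixel-based seed identification by comparison against a cloud-free low-resolution composite (assumed here to be resampled to the same grid and comparable radiometry), followed by object-based growing of candidate regions that contain at least one seed. Both thresholds are illustrative assumptions.

    ```python
    # Seed identification plus object-based region growing for a cloud mask.
    import numpy as np
    from scipy import ndimage

    def cloud_mask(high_res, composite, seed_thresh=0.25, grow_thresh=0.10):
        diff = np.asarray(high_res, dtype=float) - np.asarray(composite, dtype=float)
        seeds = diff > seed_thresh                  # confident cloud seeds
        candidates = diff > grow_thresh             # looser candidate cloud pixels
        labels, _ = ndimage.label(candidates)       # connected candidate objects
        cloud_ids = np.unique(labels[seeds])        # objects that contain a seed
        cloud_ids = cloud_ids[cloud_ids != 0]
        return np.isin(labels, cloud_ids)           # final object-based cloud mask
    ```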

  6. Automatic extraction of road features in urban environments using dense ALS data

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means for curb detection. Then, radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate an F-score close to 95% for pavement and sidewalk identification at a resolution of 20 cm, and better than 80% for road marking detection.

  7. Automated in vivo 3D high-definition optical coherence tomography skin analysis system.

    PubMed

    Ai Ping Yow; Jun Cheng; Annan Li; Srivastava, Ruchir; Jiang Liu; Wong, Damon Wing Kee; Hong Liang Tey

    2016-08-01

    The in vivo assessment and visualization of skin structures can be performed through the use of high-resolution optical coherence tomography imaging, also known as HD-OCT. However, the manual assessment of such images can be laborious and time consuming. In this paper, we present an analysis system that automatically identifies and quantifies skin characteristics such as the topography of the skin surface and the thickness of the epidermis in HD-OCT images. Comparison of this system with manual clinical measurements demonstrated its potential for automatic, objective skin analysis and disease diagnosis. To our knowledge, this is the first report of an automated system to process and analyse HD-OCT skin images.

  8. Fast Spatio-Temporal Data Mining from Large Geophysical Datasets

    NASA Technical Reports Server (NTRS)

    Stolorz, P.; Mesrobian, E.; Muntz, R.; Santos, J. R.; Shek, E.; Yi, J.; Mechoso, C.; Farrara, J.

    1995-01-01

    The use of the UCLA CONQUEST (CONtent-based Querying in Space and Time) environment for automatic cyclone extraction and detection of spatio-temporal blocking conditions on massively parallel processors (MPPs) is reviewed. CONQUEST is a data analysis environment for knowledge and data mining that supports high-resolution climate modeling.

  9. HIPS: A new hippocampus subfield segmentation method.

    PubMed

    Romero, José E; Coupé, Pierrick; Manjón, José V

    2017-12-01

    The importance of the hippocampus in the study of several neurodegenerative diseases such as Alzheimer's disease makes it a structure of great interest in neuroimaging. However, few segmentation methods have been proposed to measure its subfields due to its complex structure and the lack of high resolution magnetic resonance (MR) data. In this work, we present a new pipeline for automatic hippocampus subfield segmentation using two available hippocampus subfield delineation protocols that can work with both high and standard resolution data. The proposed method is based on multi-atlas label fusion technology that benefits from a novel multi-contrast patch match search process (using high resolution T1-weighted and T2-weighted images). The proposed method also includes as post-processing a new neural network-based error correction step to minimize systematic segmentation errors. The method has been evaluated on both high and standard resolution images and compared to other state-of-the-art methods showing better results in terms of accuracy and execution time. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Hybrid Automatic Building Interpretation System

    NASA Astrophysics Data System (ADS)

    Pakzad, K.; Klink, A.; Müterthies, A.; Gröger, G.; Stroh, V.; Plümer, L.

    2011-09-01

    HABIS (Hybrid Automatic Building Interpretation System) is a system for the automatic reconstruction of building roofs used in virtual 3D building models. Unlike most commercially available systems, HABIS is able to work to a high degree automatically. The hybrid method combines different data sources in order to exploit the advantages of each. 3D point clouds usually provide good height and surface data, whereas high-spatial-resolution aerial images provide important information on edges and detail information for roof objects like dormers or chimneys. The cadastral data provide important basic information about the building ground plans. The approach used in HABIS is a multi-stage process, which starts with a coarse roof classification based on 3D point clouds. It continues with an image-based verification of these predicted roofs. In a further step, a final classification and adjustment of the roofs is performed. In addition, some roof objects like dormers and chimneys are extracted from the aerial images and added to the models. In this paper the methods used are described and some results are presented.

  11. High-resolution magnetic resonance imaging reveals nuclei of the human amygdala: manual segmentation to automatic atlas.

    PubMed

    Saygin, Z M; Kliemann, D; Iglesias, J E; van der Kouwe, A J W; Boyd, E; Reuter, M; Stevens, A; Van Leemput, K; McKee, A; Frosch, M P; Fischl, B; Augustinack, J C

    2017-07-15

    The amygdala is composed of multiple nuclei with unique functions and connections in the limbic system and to the rest of the brain. However, standard in vivo neuroimaging tools to automatically delineate the amygdala into its multiple nuclei are still rare. By scanning postmortem specimens at high resolution (100-150µm) at 7T field strength (n = 10), we were able to visualize and label nine amygdala nuclei (anterior amygdaloid, cortico-amygdaloid transition area; basal, lateral, accessory basal, central, cortical medial, paralaminar nuclei). We created an atlas from these labels using a recently developed atlas building algorithm based on Bayesian inference. This atlas, which will be released as part of FreeSurfer, can be used to automatically segment nine amygdala nuclei from a standard resolution structural MR image. We applied this atlas to two publicly available datasets (ADNI and ABIDE) with standard resolution T1 data, used individual volumetric data of the amygdala nuclei as the measure and found that our atlas i) discriminates between Alzheimer's disease participants and age-matched control participants with 84% accuracy (AUC=0.915), and ii) discriminates between individuals with autism and age-, sex- and IQ-matched neurotypically developed control participants with 59.5% accuracy (AUC=0.59). For both datasets, the new ex vivo atlas significantly outperformed (all p < .05) estimations of the whole amygdala derived from the segmentation in FreeSurfer 5.1 (ADNI: 75%, ABIDE: 54% accuracy), as well as classification based on whole amygdala volume (using the sum of all amygdala nuclei volumes; ADNI: 81%, ABIDE: 55% accuracy). This new atlas and the segmentation tools that utilize it will provide neuroimaging researchers with the ability to explore the function and connectivity of the human amygdala nuclei with unprecedented detail in healthy adults as well as those with neurodevelopmental and neurodegenerative disorders. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Application of Multilayer Perceptron with Automatic Relevance Determination on Weed Mapping Using UAV Multispectral Imagery

    PubMed Central

    Tamouridou, Afroditi A.; Lagopodi, Anastasia L.; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios

    2017-01-01

    Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). A Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands (Red, Green, Near Infrared (NIR)) and a texture layer derived from local variance were used as input. The S. marianum identification rate using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that period, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery. PMID:29019957
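    The classification setup can be sketched as follows. This is a hedged illustration using a plain multilayer perceptron in scikit-learn (which has no ARD prior, so the relevance-determination part of MLP-ARD is not reproduced), trained on the three bands plus a local-variance texture layer as described above; the window size and network architecture are illustrative assumptions, not the study's settings.

    ```python
    # Per-pixel MLP classification on spectral bands plus a local-variance texture layer.
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.neural_network import MLPClassifier

    def local_variance(band, size=5):
        b = np.asarray(band, dtype=float)
        mean = uniform_filter(b, size)
        return uniform_filter(b ** 2, size) - mean ** 2

    def fit_weed_classifier(green, red, nir, labels):
        texture = local_variance(nir)
        X = np.stack([green, red, nir, texture], axis=-1).reshape(-1, 4)
        y = np.asarray(labels).ravel()              # per-pixel class labels
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
        return clf.fit(X, y)
    ```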

  13. Dispersion-cancelled biological imaging with quantum-inspired interferometry

    PubMed Central

    Mazurek, M. D.; Schreiter, K. M.; Prevedel, R.; Kaltenbaek, R.; Resch, K. J.

    2013-01-01

    Quantum information science promises transformative impact over a range of key technologies in computing, communication, and sensing. A prominent example uses entangled photons to overcome the resolution-degrading effects of dispersion in the medical-imaging technology of optical coherence tomography. The quantum solution introduces new challenges: inherently low signal, and artifacts, i.e. additional unwanted signal features. It has recently been shown that entanglement is not a requirement for automatic dispersion cancellation. Such classical techniques could solve the low-signal problem; however, they all still suffer from artifacts. Here, we introduce a method of chirped-pulse interferometry based on shaped laser pulses, and use it to produce artifact-free, high-resolution, dispersion-cancelled images of the internal structure of a biological sample. Our work fulfills one of the promises of quantum technologies: automatic-dispersion-cancellation interferometry in biomedical imaging. It also shows how subtle differences between a quantum technique and its classical analogue may have unforeseen, yet beneficial, consequences. PMID:23545597

  14. Application of Multilayer Perceptron with Automatic Relevance Determination on Weed Mapping Using UAV Multispectral Imagery.

    PubMed

    Tamouridou, Afroditi A; Alexandridis, Thomas K; Pantazi, Xanthoula E; Lagopodi, Anastasia L; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios

    2017-10-11

    Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). A Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands (Red, Green, Near Infrared (NIR)) and a texture layer derived from local variance were used as input. The S. marianum identification rate using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that period, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery.

  15. Automatic panoramic thermal integrated sensor

    NASA Astrophysics Data System (ADS)

    Gutin, Mikhail A.; Tsui, Eddy K.; Gutin, Olga N.

    2005-05-01

    Historically, the US Army has recognized the advantages of panoramic imagers with high image resolution: increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) in support of the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to improve situational awareness (SA) in many defensive and offensive operations, as well as to serve as a sensor node in tactical Intelligence, Surveillance and Reconnaissance (ISR). The ViperView is an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS sensor suite include ancillary sensors, advanced power management, and wakeup capability. This paper describes the development status of the APTIS system.

  16. Urban Density Indices Using Mean Shift-Based Upsampled Elevation Data

    NASA Astrophysics Data System (ADS)

    Charou, E.; Gyftakis, S.; Bratsolis, E.; Tsenoglou, T.; Papadopoulou, Th. D.; Vassilas, N.

    2015-04-01

    Urban density is an important factor in several fields, e.g. urban design, planning and land management. Modern remote sensors deliver ample information for the estimation of specific urban land classification classes (2D indicators) and of the height of urban objects (3D indicators) within an Area of Interest (AOI). In this research, two of these indicators, the Building Coverage Ratio (BCR) and the Floor Area Ratio (FAR), are numerically and automatically derived from high-resolution airborne RGB orthophotos and LiDAR data. In the pre-processing step, the low resolution elevation data are fused with the high resolution optical data through a mean-shift-based discontinuity-preserving smoothing algorithm. The outcome, an improved normalized digital surface model (nDSM), is upsampled elevation data with considerable improvement in region filling and in the "straightness" of elevation discontinuities. In a following step, a Multilayer Feedforward Neural Network (MFNN) is used to classify all pixels of the AOI into building or non-building categories. For the total surface of the block and of the buildings, we consider the number of their pixels and the area of the unit pixel. Comparison of the automatically derived BCR and FAR indicators with manually derived ones shows the applicability and effectiveness of the proposed methodology.
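    Once every pixel of a block is classified, both indicators reduce to pixel counting. The sketch below is a minimal illustration of that arithmetic; the 3 m floor height used to convert nDSM heights into floor counts is an assumption, not a value from the paper.

    ```python
    # BCR and FAR from a per-pixel building mask and an nDSM covering one block.
    import numpy as np

    def density_indices(building_mask, ndsm, pixel_area, floor_height=3.0):
        """building_mask: boolean array covering one block; ndsm: heights in metres."""
        block_area = building_mask.size * pixel_area
        footprint_area = building_mask.sum() * pixel_area
        floors = np.maximum(np.round(ndsm[building_mask] / floor_height), 1)
        floor_area = floors.sum() * pixel_area
        bcr = footprint_area / block_area            # Building Coverage Ratio
        far = floor_area / block_area                # Floor Area Ratio
        return bcr, far
    ```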

  17. The Gaia FGK benchmark stars. High resolution spectral library

    NASA Astrophysics Data System (ADS)

    Blanco-Cuaresma, S.; Soubiran, C.; Jofré, P.; Heiter, U.

    2014-06-01

    Context. An increasing number of high-resolution stellar spectra are available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform an automatic spectral analysis on a massive amount of data. When reviewing published results, biases arise and they need to be addressed and minimized. Aims: We provide a homogeneous library with a common set of calibration stars (known as the Gaia FGK benchmark stars) that will allow us to assess stellar analysis methods and calibrate spectroscopic surveys. Methods: High-resolution, high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results: We built a high-quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes human subjectivity and ensures reproducibility. Additionally, it allows us to quickly adapt the library to specific needs that may arise from future spectroscopic analyses. Based on NARVAL and HARPS data obtained within the Gaia Data Processing and Analysis Consortium (DPAC), coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. The library of spectra is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A98

  18. Report of the President’s Task Force on Aircraft Crew Complement

    DTIC Science & Technology

    1981-07-02

    ALPA - Air Line Pilots Association; APA - Allied Pilots Association; ASRS - Aviation Safety Reporting System; ATARS - Automatic Traffic Advisory and... capability significantly. The complementary Automatic Traffic Advisory and Resolution Service (ATARS) will provide collision avoidance advisories and... resolution. The main purpose of DABS/ATARS is to detect traffic and to provide aircraft escape-maneuver advisories in adjoining ATC sectors. G/A pilots

  19. Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map

    PubMed Central

    Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen

    2015-01-01

    This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions where targets may be present are detected from the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map in favor of the candidate regions. Finally, the targets are detected by segmenting the saliency map using a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately, and decrease the false alarm rate. PMID:26378543

  20. Rules of engagement: incomplete and complete pronoun resolution.

    PubMed

    Love, Jessica; McKoon, Gail

    2011-07-01

    Research on shallow processing suggests that readers sometimes encode only a superficial representation of a text and fail to make use of all available information. Greene, McKoon, and Ratcliff (1992) extended this work to pronouns, finding evidence that readers sometimes fail to automatically identify referents even when these are unambiguous. In this paper we revisit those findings. In 11 recognition probe, priming, and self-report experiments, we manipulated Greene et al.'s stories to discover under what circumstances a pronoun's referent is automatically understood. We lengthened the stories from 4 to 8 lines. This simple manipulation led to automatic and correct resolution, which we attribute to readers' increased engagement with the stories. We found evidence of resolution even when the additional text did not mention the pronoun's referent. In addition, our results suggest that the pronoun temporarily boosts the referent's accessibility, an advantage that disappears by the end of the next sentence. Finally, we present evidence from memory experiments that supports complete pronoun resolution for the longer but not the shorter stories.

  1. Road Network Extraction from Dsm by Mathematical Morphology and Reasoning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Jianliang; Zhu, Lin; Tachibana, Kikuo

    2016-06-01

    The objective of this research is the automatic extraction of the road network in an urban scene from a high resolution digital surface model (DSM). Automatic road extraction and modeling from remotely sensed data has been studied for more than a decade. The methods vary greatly due to differences in data types, regions, resolutions, etc. An advanced automatic road network extraction scheme is proposed to address the tedium of separate segmentation, recognition and grouping steps. It is based on a geometric road model that describes a multi-level structure. The 0-dimensional element is the intersection. The 1-dimensional elements are the central line and the sides. The 2-dimensional element is the plane, which is generated from the 1-dimensional elements. The key feature of the presented approach is the cross-validation of the three road elements, which runs through the entire extraction procedure. The advantage of our model and method is that the linear elements of the road can be derived directly, without any complex, non-robust connection hypotheses. An example of a Japanese scene is presented to illustrate the procedure and the performance of the approach.

  2. Automatic 3D Segmentation and Quantification of Lenticulostriate Arteries from High-Resolution 7 Tesla MRA Images.

    PubMed

    Wei Liao; Rohr, Karl; Chang-Ki Kang; Zang-Hee Cho; Worz, Stefan

    2016-01-01

    We propose a novel hybrid approach for automatic 3D segmentation and quantification of high-resolution 7 Tesla magnetic resonance angiography (MRA) images of the human cerebral vasculature. Our approach consists of two main steps. First, a 3D model-based approach is used to segment and quantify thick vessels and most parts of thin vessels. Second, remaining vessel gaps of the first step in low-contrast and noisy regions are completed using a 3D minimal path approach, which exploits directional information. We present two novel minimal path approaches. The first is an explicit approach based on energy minimization using probabilistic sampling, and the second is an implicit approach based on fast marching with anisotropic directional prior. We conducted an extensive evaluation with over 2300 3D synthetic images and 40 real 3D 7 Tesla MRA images. Quantitative and qualitative evaluation shows that our approach achieves superior results compared with a previous minimal path approach. Furthermore, our approach was successfully used in two clinical studies on stroke and vascular dementia.

  3. Automatic landslide detection from LiDAR DTM derivatives by geographic-object-based image analysis based on open-source software

    NASA Astrophysics Data System (ADS)

    Knevels, Raphael; Leopold, Philip; Petschko, Helene

    2017-04-01

    With high-resolution airborne Light Detection and Ranging (LiDAR) data more commonly available, many studies have been performed to exploit the detailed information it provides on the earth surface and to analyse its limitations. Specifically in the field of natural hazards, digital terrain models (DTMs) have been used to map hazardous processes such as landslides, mainly by visual interpretation of LiDAR DTM derivatives. However, new approaches are striving towards automatic detection of landslides to speed up the process of generating landslide inventories. These studies usually use a combination of optical imagery and terrain data, and are designed in commercial software packages such as ESRI ArcGIS, Definiens eCognition, or MathWorks MATLAB. The objective of this study was to investigate the potential of open-source software for automatic landslide detection based only on high-resolution LiDAR DTM derivatives in a study area within the federal state of Burgenland, Austria. The study area is very prone to landslides, which have been mapped with different methodologies in recent years. The free development environment R was used to integrate open-source geographic information system (GIS) software, such as SAGA (System for Automated Geoscientific Analyses), GRASS (Geographic Resources Analysis Support System), and TauDEM (Terrain Analysis Using Digital Elevation Models). The implemented geographic-object-based image analysis (GEOBIA) consisted of (1) derivation of land surface parameters, such as slope, surface roughness, curvature, and flow direction, (2) finding the optimal scale parameter by the use of an objective function, (3) multi-scale segmentation, (4) classification of landslide parts (main scarp, body, flanks) by k-means thresholding, (5) assessment of the classification performance using a pre-existing landslide inventory, and (6) post-processing analysis for further use in landslide inventories. The results of the developed open-source approach demonstrated good success rates in objectively detecting landslides in high-resolution topography data by GEOBIA.
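    Two of the listed ingredients can be sketched compactly. The illustration below is in Python (the study itself works in R with SAGA, GRASS and TauDEM) and shows the derivation of simple land surface parameters from the DTM and a k-means clustering of per-segment features as a thresholding step; the window size and cluster count are assumptions.

    ```python
    # Land surface parameters from a DTM and k-means clustering of segment features.
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.cluster import KMeans

    def surface_parameters(dtm, cellsize=1.0, window=9):
        d = np.asarray(dtm, dtype=float)
        gy, gx = np.gradient(d, cellsize)
        slope = np.degrees(np.arctan(np.hypot(gx, gy)))      # slope in degrees
        roughness = d - uniform_filter(d, window)            # residual topography
        return slope, roughness

    def kmeans_threshold(segment_features, n_clusters=4, seed=0):
        """Cluster per-segment feature vectors (rows) into candidate landslide parts."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        return km.fit_predict(segment_features)
    ```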

  4. Recent advances in automatic alignment system for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Wilhelmsen, Karl; Awwal, Abdul A. S.; Kalantar, Dan; Leach, Richard; Lowe-Webb, Roger; McGuigan, David; Miller Kamm, Vicki

    2011-03-01

    The automatic alignment system for the National Ignition Facility (NIF) is a large-scale parallel system that directs all 192 laser beams along the 300-m optical path to a 50-micron focus at the target chamber in less than 50 minutes. The system automatically commands 9,000 stepping motors to adjust mirrors and other optics based upon images acquired from high-resolution digital cameras viewing the beams at various locations. Forty-five control loops per beamline request image processing services running on a Linux cluster to analyze these images of the beams and references, and automatically steer the beams toward the target. This paper discusses the upgrades to the NIF automatic alignment system to handle new alignment needs and evolving requirements related to the various types of experiments performed. As NIF becomes a continuously operated system and more experiments are performed, performance monitoring is increasingly important for maintenance and commissioning work. Data collected during operations are analyzed for tuning of the laser and for targeting maintenance work. Handling evolving alignment and maintenance needs is expected for the planned 30-year operational life of NIF.

  5. The Coordinate Transformation Method of High Resolution dem Data

    NASA Astrophysics Data System (ADS)

    Yan, Chaode; Guo, Wang; Li, Aimin

    2018-04-01

    Coordinate transformation methods for DEM data can be divided into two categories. One reconstructs the DEM from the original vector elevation data; the other transforms DEM data blocks using transformation parameters. However, the former does not work in the absence of the original vector data, and the latter may cause errors at the joints between adjoining blocks of high resolution DEM data. In view of this problem, a method for the coordinate transformation of high resolution DEM data is proposed. The method converts the DEM data into discrete vector elevation points and then adjusts the positions of the points by bilinear interpolation. Finally, a TIN is generated from the transformed points, and the new DEM in the target coordinate system is reconstructed from the TIN. An algorithm that finds blocks and transforms them automatically is given in this paper. The method has been tested on different terrains and proved to be feasible and valid.
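    The described workflow can be sketched roughly as follows: DEM cells are turned into discrete 3D points, a user-supplied coordinate transformation is applied to each point, and the DEM is rebuilt on the target grid by Delaunay-based (TIN-like) linear interpolation. The georeferencing conventions in this sketch are simplified assumptions, and scipy's griddata stands in for the paper's own TIN reconstruction.

    ```python
    # DEM -> points -> coordinate transform -> TIN-like reconstruction on a new grid.
    import numpy as np
    from scipy.interpolate import griddata

    def transform_dem(dem, x0, y0, cellsize, transform, target_x, target_y):
        """transform(x, y) must return the transformed (x, y) coordinate arrays."""
        rows, cols = dem.shape
        xs = x0 + np.arange(cols) * cellsize        # assumes x grows with column index
        ys = y0 + np.arange(rows) * cellsize        # assumes y grows with row index
        xx, yy = np.meshgrid(xs, ys)
        tx, ty = transform(xx.ravel(), yy.ravel())  # per-point coordinate transform
        gx, gy = np.meshgrid(target_x, target_y)    # target coordinate grid
        return griddata((tx, ty), dem.ravel(), (gx, gy), method="linear")
    ```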

  6. Almaz

    NASA Technical Reports Server (NTRS)

    Viter, V.

    1993-01-01

    The basic data of the automatic space station ALMAZ-1B are overviewed, including the orbit parameters and maximum power. The principal technical characteristics of its remote sensing equipment are listed for the synthetic aperture and side-looking radar, optoelectronic equipment for stereophotography, high-resolution electronic scanner, middle-resolution optomechanical scanner, spectroradiometer for ocean satellite monitoring, and information transmission and reception. The main objectives and uses of the ALMAZ-1B information are cartography, land monitoring, geology, ecological monitoring, oceanology, pilotage, fishery, and information supply during emergencies such as natural disasters.

  7. Ultrasonic Ranging System With Increased Resolution

    NASA Technical Reports Server (NTRS)

    Meyer, William E.; Johnson, William G.

    1987-01-01

    Master-oscillator frequency increased. Ultrasonic range-measuring system with 0.1-in. resolution provides continuous digital display of four distance readings, each updated four times per second. Four rangefinder modules in system are modified versions of rangefinder used for automatic focusing in commercial series of cameras. Ultrasonic pulses emitted by system innocuous to both people and equipment. Provides economical solutions to such distance-measurement problems as posed by boats approaching docks, truck backing toward loading platform, runway-clearance readout for tail of airplane at high angle of attack, or burglar alarm.

  8. Computing 3-D steady supersonic flow via a new Lagrangian approach

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Liou, M.-S.

    1993-01-01

    The new Lagrangian method introduced by Loh and Hui (1990) is extended for 3-D steady supersonic flow computation. Details of the conservation form, the implementation of the local Riemann solver, and the Godunov and the high resolution TVD schemes are presented. The new approach is robust yet accurate, capable of handling complicated geometry and reactions between discontinuous waves. It keeps all the advantages claimed in the 2-D method of Loh and Hui, e.g., crisp resolution for a slip surface (contact discontinuity) and automatic grid generation along the stream.

  9. Creep Measurement Video Extensometer

    NASA Technical Reports Server (NTRS)

    Jaster, Mark; Vickerman, Mary; Padula, Santo, II; Juhas, John

    2011-01-01

    Understanding material behavior under load is critical to the efficient and accurate design of advanced aircraft and spacecraft. Technologies such as the one disclosed here allow accurate creep measurements to be taken automatically, reducing error. The goal was to develop a non-contact, automated system capable of capturing images that could subsequently be processed to obtain the strain characteristics of these materials during deformation, while maintaining adequate resolution to capture the true deformation response of the material. The measurement system comprises a high-resolution digital camera, computer, and software that work collectively to interpret the image.

  10. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds with close-range images is a key step in the high-precision 3D reconstruction of cultural relic objects. Given the current requirement for high texture resolution in cultural heritage work, registering point cloud and image data in object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the registration of the two data types is performed by manually partitioning the point cloud, manually matching point cloud and image data, and manually selecting corresponding 2D points in the image and the point cloud; this process not only greatly reduces efficiency but also degrades registration accuracy and causes texture seams in the colored point cloud. To solve these problems, this paper takes an image of the whole object as intermediate data and uses image matching to establish an automatic one-to-one correspondence between the point cloud and the multiple images. Matching between a central-projection reflectance intensity image of the point cloud and the optical image is applied to automatically match corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to register the two kinds of data automatically and with high accuracy. This method is expected to support high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects and has both scientific and practical value.
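    The registration step itself amounts to estimating a 3D similarity transformation (scale, rotation, translation) from matched point pairs. The sketch below is a hedged illustration only: the paper parameterizes the rotation with a Rodrigues matrix and iterates with weight selection, whereas this sketch substitutes the closed-form SVD (Umeyama) solution and omits the reweighting.

    ```python
    # Closed-form similarity transform from matched 3D point pairs (Umeyama).
    import numpy as np

    def similarity_transform(src, dst):
        """Return s, R, t such that dst_i ~= s * R @ src_i + t (src, dst are N x 3)."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))     # cross-covariance SVD
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:                    # guard against reflections
            D[2, 2] = -1.0
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / ((A ** 2).sum() / len(src))
        t = mu_d - s * R @ mu_s
        return s, R, t
    ```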

  11. The Infrared Automatic Mass Screening (IRAMS) System For Printed Circuit Board Fault Detection

    NASA Astrophysics Data System (ADS)

    Hugo, Perry W.

    1987-05-01

    The Office of the Program Manager for TMDE (OPM TMDE) has initiated a program to develop techniques for evaluating the performance of printed circuit boards (PCBs) using infrared thermal imaging. It is OPM TMDE's expectation that the standard thermal profile (STP) will become the basis for the future rapid automatic detection and isolation of gross failure mechanisms on units under test (UUTs). To accomplish this, OPM TMDE has purchased two Infrared Automatic Mass Screening (IRAMS) systems, which are scheduled for delivery in 1987. The IRAMS system combines a high resolution infrared thermal imager with a test bench and diagnostic computer hardware and software. Its purpose is to rapidly and automatically compare the thermal profile of a UUT with the STP of that unit, recalled from memory, in order to detect thermally responsive failure mechanisms in PCBs. This paper reviews the IRAMS performance requirements, outlines the plan for implementing the two systems and reports on progress to date.

  12. Automatic updating and 3D modeling of airport information from high resolution images using GIS and LIDAR data

    NASA Astrophysics Data System (ADS)

    Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng

    2007-11-01

    As one of the most important geospatial objects and military installations, the airport is always a key target in the fields of transportation and military affairs. Therefore, the automatic recognition and extraction of airports from remote sensing images is very important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is presented. Key technologies are discussed in detail, including airport feature extraction based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines buffer detection algorithm, 3D modeling based on gradual elimination of non-building points, 3D change detection between the old airport model and LiDAR data, and the import of typical CAD models. Finally, based on these technologies, we develop a prototype system; the results show that our method achieves good performance.
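    For reference, a standard Otsu threshold (the abstract does not detail the paper's specific modification) can be written in a few lines; here it stands in for the step that separates bright airport surfaces from the background in a single-band image.

    ```python
    # Standard Otsu threshold: maximize between-class variance over a histogram.
    import numpy as np

    def otsu_threshold(img, bins=256):
        hist, edges = np.histogram(np.asarray(img).ravel(), bins=bins)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                            # class-0 probability up to each bin
        m = np.cumsum(p * centers)                   # cumulative mean up to each bin
        mt = m[-1]                                   # global mean
        with np.errstate(divide="ignore", invalid="ignore"):
            between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))   # between-class variance
        return centers[np.nanargmax(between)]
    ```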

  13. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with the database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804

  14. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with the database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  15. Remote Sensing Analysis of Forest Disturbances

    NASA Technical Reports Server (NTRS)

    Asner, Gregory P. (Inventor)

    2015-01-01

    The present invention provides systems and methods to automatically analyze Landsat satellite data of forests. The present invention can easily be used to monitor any type of forest disturbance such as from selective logging, agriculture, cattle ranching, natural hazards (fire, wind events, storms), etc. The present invention provides a large-scale, high-resolution, automated remote sensing analysis of such disturbances.

  16. Remote sensing analysis of forest disturbances

    NASA Technical Reports Server (NTRS)

    Asner, Gregory P. (Inventor)

    2012-01-01

    The present invention provides systems and methods to automatically analyze Landsat satellite data of forests. The present invention can easily be used to monitor any type of forest disturbance such as from selective logging, agriculture, cattle ranching, natural hazards (fire, wind events, storms), etc. The present invention provides a large-scale, high-resolution, automated remote sensing analysis of such disturbances.

  17. Automatic NMR field-frequency lock-pulsed phase locked loop approach.

    PubMed

    Kan, S; Gonord, P; Fan, M; Sauzade, M; Courtieu, J

    1978-06-01

    A self-contained deuterium frequency-field lock scheme for a high-resolution NMR spectrometer is described. It is based on phase-locked loop techniques in which the free induction decay signal behaves as a voltage-controlled oscillator. By pulsing the spins at an offset frequency of a few hundred hertz and using a digital phase-frequency discriminator, this method not only eliminates the usual phase, RF power and offset adjustments needed in conventional lock systems but also possesses automatic pull-in characteristics that dispense with the use of field sweeps to locate the NMR line prior to closure of the lock loop.

  18. FIEStool: Automated data reduction for FIber-fed Echelle Spectrograph (FIES)

    NASA Astrophysics Data System (ADS)

    Stempels, Eric; Telting, John

    2017-08-01

    FIEStool automatically reduces data obtained with the FIber-fed Echelle Spectrograph (FIES) at the Nordic Optical Telescope, a high-resolution spectrograph available on a stand-by basis, while also allowing the basic properties of the reduction to be controlled in real time by the user. It provides a Graphical User Interface and offers bias subtraction, flat-fielding, scattered-light subtraction, and specialized reduction tasks from the external packages IRAF (ascl:9911.002) and NumArray. The core of FIEStool is instrument-independent; the software, written in Python, could with minor modifications also be used for automatic reduction of data from other instruments.
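    As an illustration of the first two calibration steps named above (not FIEStool's actual code), the sketch below performs master-bias subtraction and flat-fielding with a normalized master flat; the calibration frames are assumed to be equally sized 2D arrays.

    ```python
    # Master-bias subtraction and flat-fielding for a CCD science frame.
    import numpy as np

    def make_master(frames):
        """Median-combine a stack of calibration frames."""
        return np.median(np.stack([np.asarray(f, dtype=float) for f in frames]), axis=0)

    def calibrate(science, bias_frames, flat_frames):
        master_bias = make_master(bias_frames)
        master_flat = make_master(flat_frames) - master_bias
        master_flat /= np.median(master_flat)        # normalize the flat to unity
        return (np.asarray(science, dtype=float) - master_bias) / master_flat
    ```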

  19. Rules of Engagement: Incomplete and Complete Pronoun Resolution

    PubMed Central

    Love, Jessica; McKoon, Gail

    2011-01-01

    Research on shallow processing suggests that readers sometimes encode only a superficial representation of a text, failing to make use of all available information. Greene, McKoon and Ratcliff (1992) extended this work to pronouns, finding evidence that readers sometimes fail to automatically identify referents even when they are unambiguous. In this paper we revisit those findings. In 11 recognition probe, priming, and self-report experiments, we manipulated Greene et al.’s stories to discover under what circumstances a pronoun’s referent is automatically understood. We lengthened the stories from four to eight lines, a simple manipulation that led to automatic and correct resolution, which we attribute to readers’ increased engagement with the stories. We found evidence of resolution even when the additional text did not mention the pronoun’s referent. In addition, our results suggest that the pronoun temporarily boosts the referent’s accessibility, an advantage that disappears by the end of the next sentence. Finally, we present evidence from memory experiments that support complete pronoun resolution for the longer, but not the shorter, stories. PMID:21480757

  20. Application of the high resolution return beam vidicon

    NASA Technical Reports Server (NTRS)

    Cantella, M. J.

    1977-01-01

    The Return Beam Vidicon (RBV) is a high-performance electronic image sensor and electrical storage component. It can accept continuous or discrete exposures. Information can be read out with a single scan or with many repetitive scans for either signal processing or display. Resolution capability is 10,000 TV lines/height, and at 100 lp/mm, performance matches or exceeds that of film, particularly with low-contrast imagery. Electronic zoom can be employed effectively for image magnification and data compression. The high performance and flexibility of the RBV permit wide application in systems for reconnaissance, scan conversion, information storage and retrieval, and automatic inspection and test. This paper summarizes the characteristics and performance parameters of the RBV and cites examples of feasible applications.

  1. Development of High Sensitivity Nuclear Emulsion and Fine Grained Emulsion

    NASA Astrophysics Data System (ADS)

    Kawahara, H.; Asada, T.; Naka, T.; Naganawa, N.; Kuwabara, K.; Nakamura, M.

    2014-08-01

    Nuclear emulsion is a particle detector with high spatial resolution and angular resolution. It became useful for large-statistics experiments thanks to the development of automatic scanning systems. In 2010, a facility for emulsion production was introduced and R&D on nuclear emulsion began at Nagoya University. In this paper, we present the results of the development of high sensitivity emulsion and fine grained emulsion for a dark matter search experiment. The improvement in sensitivity is achieved by raising the density of silver halide crystals and doping with a well-adjusted amount of chemicals. Production of fine grained emulsion was difficult because of unexpected crystal condensation. By mixing polyvinyl alcohol (PVA) with gelatin as a binder, we succeeded in making a stable fine grained emulsion.

  2. Initial Experience With Ultra High-Density Mapping of Human Right Atria.

    PubMed

    Bollmann, Andreas; Hilbert, Sebastian; John, Silke; Kosiuk, Jedrzej; Hindricks, Gerhard

    2016-02-01

    Recently, an automatic, high-resolution mapping system has been presented to accurately and quickly identify right atrial geometry and activation patterns in animals, but human data are lacking. This study aims to assess the clinical feasibility and accuracy of high-density electroanatomical mapping of various RA arrhythmias. Electroanatomical maps of the RA (35 partial and 24 complete) were created in 23 patients using a novel mini-basket catheter with 64 electrodes and automatic electrogram annotation. Median acquisition time was 6:43 minutes (0:39-23:05 minutes) with shorter times for partial (4.03 ± 4.13 minutes) than for complete maps (9.41 ± 4.92 minutes). During mapping 3,236 (710-16,306) data points were automatically annotated without manual correction. Maps obtained during sinus rhythm created geometry consistent with CT imaging and demonstrated activation originating at the middle to superior crista terminalis, while maps during CS pacing showed right atrial activation beginning at the infero-septal region. Activation patterns were consistent with cavotricuspid isthmus-dependent atrial flutter (n = 4), complex reentry tachycardia (n = 1), or ectopic atrial tachycardia (n = 2). His bundle and fractionated potentials in the slow pathway region were automatically detected in all patients. Ablation of the cavotricuspid isthmus (n = 9), the atrio-ventricular node (n = 2), atrial ectopy (n = 2), and the slow pathway (n = 3) was successfully and safely performed. RA mapping with this automatic high-density mapping system is fast, feasible, and safe. It is possible to reproducibly identify propagation of atrial activation during sinus rhythm, various tachycardias, and also complex reentrant arrhythmias. © 2015 Wiley Periodicals, Inc.

  3. A distributed automatic target recognition system using multiple low resolution sensors

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj

    2008-04-01

    In this paper, we propose a multi-agent system which uses swarming techniques to perform high accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can co-operatively share information from low-resolution images of different looks and use this information to perform high accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and exploits swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bit shifts and additions, yet achieves a 16X pixel resolution enhancement, and is moreover parallelizable. We develop advanced, adaptive particle-filtering-based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address this, using homography decomposition and ground plane surface estimation.

  4. Resolution Enhanced Magnetic Sensing System for Wide Coverage Real Time UXO Detection

    NASA Astrophysics Data System (ADS)

    Zalevsky, Zeev; Bregman, Yuri; Salomonski, Nizan; Zafrir, Hovav

    2012-09-01

    In this paper we present a new high resolution automatic detection algorithm based upon a Wavelet transform and validate it in marine experiments. The proposed approach allows automatic detection at very low signal-to-noise ratios. The amount of calculation is reduced, the magnetic trend is suppressed, and the probability of detection / false-alarm rate can easily be controlled. Moreover, the algorithm makes it possible to distinguish between close targets. In the algorithm we use the physical form of the magnetic field of a magnetic dipole to define a Wavelet mother function that can later detect magnetic targets modeled as dipoles and embedded in noisy surroundings, at improved resolution. The proposed algorithm was first applied to synthesized targets and then validated in field experiments involving a marine surface-floating system for wide coverage real time unexploded ordnance (UXO) detection and mapping. The detection probability achieved in the marine experiment was above 90%. The horizontal radial error of most of the detected targets was only 16 m, and two baseline targets that were immersed about 20 m from one another could easily be distinguished.
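
    A minimal sketch of the underlying idea, assuming a 1-D magnetic profile and the textbook anomaly shape of a buried vertical dipole as the template; this is a multi-scale matched-filter analogue of the wavelet approach, not the authors' exact mother wavelet, scales, or marine data, and all file names and parameters are illustrative.

```python
# Hypothetical sketch: multi-scale matched filtering of a magnetic profile with
# an idealized dipole-anomaly template (stand-in for the paper's Wavelet method).
import numpy as np

def dipole_template(x, depth):
    """Vertical-field anomaly of a vertical magnetic dipole buried at `depth`,
    sampled along a horizontal profile x (arbitrary amplitude units)."""
    t = (2.0 * depth**2 - x**2) / (x**2 + depth**2) ** 2.5
    return t / np.linalg.norm(t)          # unit energy so scales are comparable

def detect_dipoles(profile, dx=1.0, depths=(5.0, 10.0, 20.0), k_sigma=4.0):
    """Return sample indices whose matched-filter response exceeds k_sigma
    robust standard deviations at any assumed depth."""
    profile = profile - np.median(profile)            # crude removal of the magnetic trend
    responses = []
    for depth in depths:
        half = int(5 * depth / dx)                    # template support ~ +/- 5 depths
        x = np.arange(-half, half + 1) * dx
        tmpl = dipole_template(x, depth)
        responses.append(np.convolve(profile, tmpl[::-1], mode="same"))
    resp = np.max(np.abs(responses), axis=0)
    sigma = 1.4826 * np.median(np.abs(resp - np.median(resp)))  # robust noise estimate
    return np.flatnonzero(resp > k_sigma * sigma)

# Usage on synthetic data: one dipole anomaly at sample 500 plus white noise.
x = np.arange(1000) * 1.0
signal = 50.0 * dipole_template(x - 500.0, 10.0) + np.random.normal(0, 0.05, x.size)
print(detect_dipoles(signal, dx=1.0))
```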

  5. Object-oriented recognition of high-resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan

    2016-01-01

    With the development of remote sensing imaging technology and the improvement of the resolution of multi-source imagery in the visible, multispectral and hyperspectral domains, high resolution remote sensing images have been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, and the environment. In remote sensing imagery, the segmentation of ground targets, feature extraction and automatic recognition are key and challenging topics in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of typical-object (vehicle) class generation, nonparametric density estimation, mean shift segmentation, a multi-scale corner detection algorithm, and template-based local shape matching. A remote sensing vehicle image classification software system was designed and implemented to meet these requirements.

  6. Image matching as a data source for forest inventory - Comparison of Semi-Global Matching and Next-Generation Automatic Terrain Extraction algorithms in a typical managed boreal forest environment

    NASA Astrophysics Data System (ADS)

    Kukkonen, M.; Maltamo, M.; Packalen, P.

    2017-08-01

    Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.
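
    As an illustration of the kind of dense matching the paper evaluates, the following sketch runs OpenCV's semi-global block matching on a rectified image pair; it is not the SGM or NGATE implementation used in the study, and the file names and parameter values are placeholders.

```python
# Minimal semi-global matching sketch using OpenCV's StereoSGBM (illustrative
# parameters; real aerial blocks need epipolar rectification and tuned settings).
import cv2
import numpy as np

left = cv2.imread("left_rectified.tif", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("right_rectified.tif", cv2.IMREAD_GRAYSCALE)

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,              # must be a multiple of 16
    blockSize=block,
    P1=8 * block * block,            # smoothness penalties for small/large jumps
    P2=32 * block * block,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point
valid = disparity > sgbm.getMinDisparity()

# With known baseline B and focal length f (pixels), depth = B * f / disparity.
B, f = 1000.0, 12000.0                                           # illustrative values
depth = np.where(valid, B * f / np.maximum(disparity, 1e-6), np.nan)
print(np.nanmean(depth))
```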

  7. The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences

    NASA Astrophysics Data System (ADS)

    Schwalbe, Ellen; Maas, Hans-Gerd

    2017-12-01

    This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
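
    A minimal sketch of subpixel patch matching between two frames of a monocular sequence, using phase correlation from scikit-image as a stand-in for the authors' grey-value matching techniques; the frame names, window size and grid spacing are illustrative assumptions.

```python
# Illustrative subpixel displacement tracking between two frames of a time-lapse
# sequence with skimage's phase correlation (not the paper's exact matcher).
import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation

frame0 = io.imread("glacier_t0.png", as_gray=True)   # hypothetical frames
frame1 = io.imread("glacier_t1.png", as_gray=True)

def track_patch(img0, img1, row, col, half=32, upsample=100):
    """Estimate the (dy, dx) displacement of a (2*half)^2 patch with subpixel accuracy."""
    ref = img0[row - half:row + half, col - half:col + half]
    mov = img1[row - half:row + half, col - half:col + half]
    shift, error, _ = phase_cross_correlation(ref, mov, upsample_factor=upsample)
    return -shift, error     # sign flipped: `shift` maps the moving patch onto the reference

# Track a regular raster of points, as in a dense trajectory field.
rows, cols = np.meshgrid(np.arange(64, frame0.shape[0] - 64, 32),
                         np.arange(64, frame0.shape[1] - 64, 32), indexing="ij")
vectors = [track_patch(frame0, frame1, r, c) for r, c in zip(rows.ravel(), cols.ravel())]
```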

  8. Deriving high-resolution protein backbone structure propensities from all crystal data using the information maximization device.

    PubMed

    Solis, Armando D

    2014-01-01

    The most informative probability distribution functions (PDFs) describing the Ramachandran phi-psi dihedral angle pair, a fundamental descriptor of backbone conformation of protein molecules, are derived from high-resolution X-ray crystal structures using an information-theoretic approach. The Information Maximization Device (IMD) is established, based on fundamental information-theoretic concepts, and then applied specifically to derive highly resolved phi-psi maps for all 20 single amino acid and all 8000 triplet sequences at an optimal resolution determined by the volume of current data. The paper shows that utilizing the latent information contained in all viable high-resolution crystal structures found in the Protein Data Bank (PDB), totaling more than 77,000 chains, permits the derivation of a large number of optimized sequence-dependent PDFs. This work demonstrates the effectiveness of the IMD and the superiority of the resulting PDFs by extensive fold recognition experiments and rigorous comparisons with previously published triplet PDFs. Because it automatically optimizes PDFs, IMD results in improved performance of knowledge-based potentials, which rely on such PDFs. Furthermore, it provides an easy computational recipe for empirically deriving other kinds of sequence-dependent structural PDFs with greater detail and precision. The high-resolution phi-psi maps derived in this work are available for download.
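
    A minimal sketch of the empirical starting point, assuming a local directory of PDB chain files and a 10-degree bin width; it illustrates only a raw phi-psi frequency map for a single residue type, not the Information Maximization Device itself.

```python
# Hypothetical sketch: an empirical phi-psi probability map for one amino acid,
# accumulated from PDB files with Biopython. Paths and bin width are assumptions.
import glob
import math
import numpy as np
from Bio.PDB import PDBParser, PPBuilder

BIN_DEG = 10                       # assumed map resolution (degrees)
nbins = 360 // BIN_DEG
counts = np.zeros((nbins, nbins))  # rows: phi bins, cols: psi bins

parser = PDBParser(QUIET=True)
builder = PPBuilder()

for path in glob.glob("pdb_chains/*.pdb"):            # hypothetical directory
    structure = parser.get_structure("s", path)
    for pp in builder.build_peptides(structure):
        residues = list(pp)
        for res, (phi, psi) in zip(residues, pp.get_phi_psi_list()):
            if res.get_resname() != "ALA":            # example: alanine map
                continue
            if phi is None or psi is None:            # chain termini lack one angle
                continue
            i = int((math.degrees(phi) + 180) // BIN_DEG) % nbins
            j = int((math.degrees(psi) + 180) // BIN_DEG) % nbins
            counts[i, j] += 1

# Normalize to a probability distribution over the (phi, psi) torus.
pdf = (counts + 1e-9) / (counts.sum() + 1e-9 * counts.size)
```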

  9. Nested Machine Learning Facilitates Increased Sequence Content for Large-Scale Automated High Resolution Melt Genotyping

    PubMed Central

    Fraley, Stephanie I.; Athamanolap, Pornpat; Masek, Billie J.; Hardick, Justin; Carroll, Karen C.; Hsieh, Yu-Hsiang; Rothman, Richard E.; Gaydos, Charlotte A.; Wang, Tza-Huei; Yang, Samuel

    2016-01-01

    High Resolution Melt (HRM) is a versatile and rapid post-PCR DNA analysis technique primarily used to differentiate sequence variants among only a few short amplicons. We recently developed a one-vs-one support vector machine algorithm (OVO SVM) that enables the use of HRM for identifying numerous short amplicon sequences automatically and reliably. Herein, we set out to maximize the discriminating power of HRM + SVM for a single genetic locus by testing longer amplicons harboring significantly more sequence information. Using universal primers that amplify the hypervariable bacterial 16S rRNA gene as a model system, we found that long amplicons yield more complex HRM curve shapes. We developed a novel nested OVO SVM approach to take advantage of this feature and achieved 100% accuracy in the identification of 37 clinically relevant bacteria in leave-one-out cross-validation. A subset of organisms was independently tested. Those from pure culture were identified with high accuracy, while those tested directly from clinical blood bottles displayed more technical variability and reduced accuracy. Our findings demonstrate that long sequences can be accurately and automatically profiled by HRM with a novel nested SVM approach and suggest that clinical sample testing is feasible with further optimization. PMID:26778280
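
    A minimal sketch of the classification stage only, using scikit-learn's SVC (which trains one-vs-one binary classifiers internally for multiclass problems) scored with leave-one-out cross-validation; the synthetic "melt curves" below are placeholders for real HRM data, and no nesting is shown.

```python
# Illustrative OVO-SVM classification of toy melt-curve feature vectors.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_per_class, n_temps = 5, 8, 200      # e.g. 200 fluorescence readings per curve
X = np.vstack([np.cumsum(rng.normal(c, 1.0, (n_per_class, n_temps)), axis=1)
               for c in range(n_classes)])       # toy "melt curves", one class per offset
y = np.repeat(np.arange(n_classes), n_per_class)

# SVC uses a one-vs-one decomposition for multiclass problems by default.
model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.2%}")
```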

  10. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: 1) improving the FastICA algorithm using a Moore-Penrose pseudo-inverse matrix model; 2) automated seeding of the PFICA algorithm based on the LUV color space and proposed simple rules to split the image into three regions (shadow + vegetation, bare soil + roads, and buildings, respectively); 3) masking out the final building detection results from the PFICA outputs utilizing the K-means clustering algorithm with two clusters and conducting simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.6% and 85.5% overall pixel-based and object-based precision performances, respectively.
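
    For orientation, the following sketch strings together the two library components named in the abstract, FastICA unmixing and two-cluster K-means masking, on a generic RGB tile; it omits the Moore-Penrose initialisation, LUV-based seeding and morphological clean-up of the actual PFICA method, and the file name and channel-selection heuristic are placeholders.

```python
# Illustrative FastICA unmixing + K-means (k=2) masking of an RGB satellite tile.
import numpy as np
from skimage import io, color
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rgb = io.imread("google_earth_tile.png")[:, :, :3]      # hypothetical tile
h, w, _ = rgb.shape
pixels = rgb.reshape(-1, 3).astype(float)

# Unmix the three bands into three statistically independent components.
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(pixels)                  # shape (n_pixels, 3)

# Crude stand-in for purposive seeding: pick the component most correlated with brightness.
luminance = color.rgb2gray(rgb).ravel()
best = np.argmax([abs(np.corrcoef(components[:, i], luminance)[0, 1]) for i in range(3)])

# K-means with two clusters separates candidate buildings from background.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    components[:, best].reshape(-1, 1))
building_mask = labels.reshape(h, w) == labels[np.argmax(components[:, best])]
```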

  11. Evaluation of satellite remote sensing and automatic data techniques for characterization of wetlands and coastal marshlands. [Atchafalaya River Basin, Louisiana

    NASA Technical Reports Server (NTRS)

    Cartmill, R. H. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The evaluation was conducted in a humid swamp and marsh area of southern Louisiana. ERTS digital multispectral scanner data was compared with similar data gathered by intermediate altitude aircraft. Automatic data processing was applied to several data sets to produce simulated color infrared images, analysis of single bands, thematic maps, and surface classifications. These products were used to determine the effectiveness of satellites to monitor accretion of land, locate aquatic plants, determine water characteristics, and identify marsh and forest species. The results show that to some extent all of these can be done with satellite data. It is most effective for monitoring accretion and least effective in locating aquatic plants. The data sets used show that the ERTS data is superior in mapping quality and accuracy to the aircraft data. However, in some applications requiring high resolution or maximum use of intermittent clear weather conditions, data gathering by aircraft is preferable. Data processing costs for equivalent areas are about three times greater for aircraft data than ERTS data. This is primarily because of the larger volume of data generated by the high resolution aircraft system.

  12. An Evaluation of Feature Learning Methods for High Resolution Image Classification

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Montoya, J.; Schindler, K.

    2012-07-01

    Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
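
    A minimal sketch of the comparison framework on synthetic data: per-pixel patch features are taken either as raw intensities or as a few PCA channels, and both are scored with the same Random Forest classifier; no DBN, texture filters or real imagery are included, and all sizes are illustrative.

```python
# Illustrative comparison of raw-intensity patch features vs. PCA-learned features
# under a common Random Forest classifier (toy image and labels).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
image = rng.random((256, 256))                 # toy single-band image
labels = (image > 0.5).astype(int)             # toy reference data

def extract_patches(img, lab, half=3, n_samples=2000):
    """Sample (2*half+1)^2 patches and the class label of their centre pixel."""
    rows = rng.integers(half, img.shape[0] - half, n_samples)
    cols = rng.integers(half, img.shape[1] - half, n_samples)
    patches = np.array([img[r - half:r + half + 1, c - half:c + half + 1].ravel()
                        for r, c in zip(rows, cols)])
    return patches, lab[rows, cols]

X_raw, y = extract_patches(image, labels)
X_pca = PCA(n_components=8).fit_transform(X_raw)   # unsupervised (linear) feature learning

rf = RandomForestClassifier(n_estimators=200, random_state=0)
for name, X in [("raw intensities", X_raw), ("PCA channels", X_pca)]:
    print(name, cross_val_score(rf, X, y, cv=5).mean())
```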

  13. POPCORN: a Supervisory Control Simulation for Workload and Performance Research

    NASA Technical Reports Server (NTRS)

    Hart, S. G.; Battiste, V.; Lester, P. T.

    1984-01-01

    A multi-task simulation of a semi-automatic supervisory control system was developed to provide an environment in which training, operator strategy development, failure detection and resolution, levels of automation, and operator workload can be investigated. The goal was to develop a well-defined, but realistically complex, task that would lend itself to model-based analysis. The name of the task (POPCORN) reflects the visual display that depicts different task elements milling around, waiting to be released and pop out to be performed. The operator's task was to complete each of 100 task elements, which were represented by different symbols, by selecting a target task and entering the desired command. The simulated automatic system then completed the selected function automatically. Highly significant differences in performance, strategy, and rated workload were found as a function of all experimental manipulations (except reward/penalty).

  14. Large-scale high-resolution non-invasive geophysical archaeological prospection for the investigation of entire archaeological landscapes

    NASA Astrophysics Data System (ADS)

    Trinks, Immo; Neubauer, Wolfgang; Hinterleitner, Alois; Kucera, Matthias; Löcker, Klaus; Nau, Erich; Wallner, Mario; Gabler, Manuel; Zitz, Thomas

    2014-05-01

    Over the past three years the Ludwig Boltzmann Institute for Archaeological Prospection and Virtual Archaeology (http://archpro.lbg.ac.at), founded in Vienna in 2010, in collaboration with its ten European partner organizations, has made considerable progress in the development and application of near-surface geophysical survey technology and methodology, mapping square kilometres rather than hectares at unprecedented spatial resolution. The use of multiple novel motorized multichannel GPR and magnetometer systems (both Förster/Fluxgate and Cesium type) in combination with advanced and centimetre-precise positioning systems (robotic total stations and Real-Time Kinematic GPS), permitting efficient navigation in open fields, has resulted in comprehensive blanket-coverage archaeological prospection surveys of important cultural heritage sites, such as the landscape surrounding Stonehenge in the framework of the Stonehenge Hidden Landscape Project, the mapping of the World Cultural Heritage site Birka-Hovgården in Sweden, and the detailed investigation of the Roman urban landscape of Carnuntum near Vienna. Efficient state-of-the-art archaeological prospection survey solutions require adequate fieldwork methodologies and appropriate data processing tools for timely quality control of the data in the field and large-scale data visualisations after arrival back in the office. The processed and optimized visualisations of the geophysical measurement data provide the basis for subsequent archaeological interpretation. Integration of the high-resolution geophysical prospection data with remote sensing data acquired through aerial photography, airborne laser and hyperspectral scanning, terrestrial laser scanning or detailed digital terrain models derived through photogrammetric methods permits improved understanding and spatial analysis as well as the preparation of comprehensible presentations for the stakeholders (scientific community, cultural heritage managers, public). Of paramount importance with regard to large-scale high-resolution data acquisition with motorized survey systems are exact data positioning and the removal of any measurement effects caused by the survey vehicle. The large amount of generated data requires efficient semi-automatic and automated tools for the extraction and rendering of important information. Semi-automatic data segmentation and classification precede the detailed 3D archaeological interpretation, which still requires considerable manual input. We present the latest technological and methodological developments regarding motorized near-surface GPR and magnetometer prospection, as well as application examples from different iconic European archaeological sites.

  15. Automatic Extraction of High-Resolution Rainfall Series from Rainfall Strip Charts

    NASA Astrophysics Data System (ADS)

    Saa-Requejo, Antonio; Valencia, Jose Luis; Garrido, Alberto; Tarquis, Ana M.

    2015-04-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on a host of factors, including climate, soil, topography, and cropping and land management practices, among others. Most models of soil erosion or hydrological processes need an accurate storm characterization. However, these data are not always available, and in some cases indirect models are generated to fill the gap. In Spain, rain intensity data for periods shorter than 24 hours have been recorded since 1924, and many studies are limited by their availability. In many cases these data are stored on rainfall strip charts at the meteorological stations but have not been transferred into numerical form. To overcome this deficiency in the raw data, a process of information extraction from large numbers of rainfall strip charts has been implemented by means of computer software. A method has been developed that largely automates the labour-intensive extraction work, based on van Piggelen et al. (2011). The method consists of the following five basic steps: 1) scanning the charts to high-resolution digital images, 2) manually and visually registering relevant meta information from the charts and pre-processing, 3) applying automatic curve extraction software in a batch process to determine the coordinates of cumulative rainfall lines on the images (the main step), 4) post-processing the curves that were not correctly determined in step 3, and 5) aggregating the cumulative rainfall in pixel coordinates to the desired time resolution. A colour detection procedure is introduced that automatically separates the background of the charts and rolls from the grid and subsequently the rainfall curve. The rainfall curve is detected by minimization of a cost function. Some utilities have been added to improve on the previous work and automate auxiliary processes: readjusting the bands, merging bands that were scanned in two parts, and detecting and cropping the borders of unused bands (as required by the software). Variations based on hue (HSV) or RGB colour spaces have also been included. By applying this digitization to rainfall strip charts, 209 station-years of data from three locations in the centre of Spain have been transformed into long-term rainfall time series with 5-min resolution. References: van Piggelen, H.E., T. Brandsma, H. Manders, and J. F. Lichtenauer, 2011: Automatic Curve Extraction for Digitizing Rainfall Strip Charts. J. Atmos. Oceanic Technol., 28, 891-906. Acknowledgements: Financial support for this research by the DURERO Project (Env.C1.3913442) is greatly appreciated.
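
    A minimal sketch of the colour-detection and curve-reading steps, assuming a bluish ink trace, an OpenCV HSV threshold and a simple per-column read-out; the hue bounds, file name and calibration constants are illustrative and not those used by the authors.

```python
# Illustrative separation of a rainfall trace from a strip-chart grid by colour,
# followed by reading one cumulative-curve ordinate per image column.
import cv2
import numpy as np

chart = cv2.imread("strip_chart_scan.png")                 # hypothetical scan (BGR)
hsv = cv2.cvtColor(chart, cv2.COLOR_BGR2HSV)

# Keep strongly saturated bluish pixels; bounds must be tuned per scanner/ink.
ink_mask = cv2.inRange(hsv, (90, 80, 40), (140, 255, 255))
ink_mask = cv2.morphologyEx(ink_mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

rows, cols = np.nonzero(ink_mask)
curve = np.full(chart.shape[1], np.nan)
for c in range(chart.shape[1]):
    hits = rows[cols == c]
    if hits.size:
        curve[c] = hits.min()          # topmost ink pixel = cumulative rainfall level

# Convert pixel coordinates to engineering units with the chart's known scale.
mm_per_pixel, minutes_per_pixel = 0.1, 0.5                 # illustrative calibration
cumulative_mm = (chart.shape[0] - curve) * mm_per_pixel
```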

  16. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation, high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows at high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the multiple coupled codes. The results for flow over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model and overset adaptive mesh refinement can effectively enhance the spatial resolution of simulated turbulent wake eddies.

  17. iMSRC: converting a standard automated microscope into an intelligent screening platform.

    PubMed

    Carro, Angel; Perez-Martinez, Manuel; Soriano, Joaquim; Pisano, David G; Megias, Diego

    2015-05-27

    Microscopy in the context of biomedical research is demanding new tools to automatically detect and capture objects of interest. The few extant packages addressing this need, however, have enjoyed limited uptake due to complexity of use and installation. To overcome these drawbacks, we developed iMSRC, which combines ease of use and installation with high flexibility and enables applications such as rare event detection and high-resolution tissue sample screening, saving time and resources.

  18. ST Spot Detector: a web-based application for automatic spot and tissue detection for spatial Transcriptomics image datasets.

    PubMed

    Wong, Kim; Navarro, José Fernández; Bergenstråhle, Ludvig; Ståhl, Patrik L; Lundeberg, Joakim

    2018-06-01

    Spatial Transcriptomics (ST) is a method which combines high resolution tissue imaging with high-throughput transcriptome sequencing data. These data must be aligned with the images for correct visualization, a process that involves several manual steps. Here we present ST Spot Detector, a web tool that automates and facilitates this alignment through a user friendly interface. jose.fernandez.navarro@scilifelab.se. Supplementary data are available at Bioinformatics online.

  19. Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography.

    PubMed

    Ahlers, C; Simader, C; Geitzenauer, W; Stock, G; Stetson, P; Dastmalchi, S; Schmidt-Erfurth, U

    2008-02-01

    A limited number of scans compromises the ability of conventional optical coherence tomography (OCT) to track chorioretinal disease in its full extent. Failures in edge-detection algorithms falsify the results of retinal mapping even further. High-definition OCT (HD-OCT) is based on raster scanning and was used to visualise the localisation and volume of intra- and sub-pigment-epithelial (RPE) changes in fibrovascular pigment epithelial detachments (fPED). Two different scanning patterns were evaluated. 22 eyes with fPED were imaged using a frequency-domain, high-speed prototype of the Cirrus HD-OCT. The axial resolution was 6 μm, and the scanning speed was 25,000 A-scans/s. Two different scanning patterns covering an area of 6 x 6 mm in the macular retina were compared. Three-dimensional topographic reconstructions and volume calculations were performed using MATLAB-based automatic segmentation software. Detailed information about the layer-specific distribution of fluid accumulation and volumetric measurements can be obtained for retinal and sub-RPE volumes. Both raster scans show a high correlation (p<0.01; R2>0.89) of measured values, that is, PED volume/area, retinal volume and mean retinal thickness. Quality control of the automatic segmentation revealed reasonable results in over 90% of the examinations. Automatic segmentation allows for detailed quantitative and topographic analysis of the RPE and the overlying retina. In fPED, the 128 x 512 scanning pattern shows mild advantages when compared with the 256 x 256 scan. Together with the ability for automatic segmentation, HD-OCT clearly improves the clinical monitoring of chorioretinal disease by adding relevant new parameters. HD-OCT is likely capable of enhancing the understanding of pathophysiology and the benefits of treatment for current anti-CNV strategies in the future.

  20. Segmentation of Nerve Bundles and Ganglia in Spine MRI Using Particle Filters

    PubMed Central

    Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina

    2011-01-01

    Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation. PMID:22003741

  1. Segmentation of nerve bundles and ganglia in spine MRI using particle filters.

    PubMed

    Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina

    2011-01-01

    Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation.
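
    A generic bootstrap particle filter sketch, shown here for a scalar state with a Gaussian likelihood; it illustrates only the predict/weight/resample cycle of such trackers, not the paper's Bézier-spline particle representation or its MR-specific image likelihood model, and all parameters are illustrative.

```python
# Illustrative bootstrap particle filter for tracking a drifting 1-D quantity.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, likelihood, n_particles=500, motion_std=1.0):
    """Track a scalar state given a likelihood(particles, observation) function."""
    particles = rng.normal(observations[0], 5.0, n_particles)   # crude initialisation
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, motion_std, n_particles)   # dynamics (random walk)
        weights = likelihood(particles, z)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))            # posterior mean
        idx = rng.choice(n_particles, n_particles, p=weights)    # multinomial resampling
        particles = particles[idx]
    return np.array(estimates)

# Toy usage: noisy observations of a drifting centreline position.
truth = np.cumsum(rng.normal(0, 0.5, 100)) + 50.0
obs = truth + rng.normal(0, 2.0, 100)
gauss = lambda p, z: np.exp(-0.5 * ((p - z) / 2.0) ** 2)
print(np.abs(particle_filter(obs, gauss) - truth).mean())
```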

  2. High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications

    NASA Astrophysics Data System (ADS)

    Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.

    2012-07-01

    The recording of high resolution point clouds with sub-mm resolution is a demanding and cost intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach, where techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industry cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its strong infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost each pixel. The use of multiple cameras enables the acquisition of a high resolution point cloud with high accuracy for each shot. For the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.

  3. Geospatial Image Mining For Nuclear Proliferation Detection: Challenges and New Opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L; Cheriyadat, Anil M

    2010-01-01

    With the increasing understanding and availability of nuclear technologies, and the increasing pursuit of nuclear technologies by several new countries, it is becoming increasingly important to monitor nuclear proliferation activities. There is a great need for developing technologies to automatically or semi-automatically detect nuclear proliferation activities using remote sensing. Images acquired from earth observation satellites are an important source of information in detecting proliferation activities. High-resolution remote sensing images are highly useful in verifying the correctness, as well as completeness, of any nuclear program. DOE national laboratories are interested in detecting nuclear proliferation by developing advanced geospatial image mining algorithms. In this paper we describe the current understanding of geospatial image mining techniques, enumerate key gaps, and identify future research needs in the context of nuclear proliferation.

  4. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  5. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images.

    PubMed

    Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura

    2016-01-01

    The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require a reliable cell image segmentation, ideally capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments to each dataset.

  6. Integration of Point Clouds from Terrestrial Laser Scanning and Image-Based Matching for Generating High-Resolution Orthoimages

    NASA Astrophysics Data System (ADS)

    Salach, A.; Markiewicza, J. S.; Zawieska, D.

    2016-06-01

    An orthoimage is one of the basic photogrammetric products used for architectural documentation of historical objects; recently, it has become a standard in such work. Considering the increasing popularity of photogrammetric techniques applied in the cultural heritage domain, this research examines the two most popular measuring technologies: terrestrial laser scanning, and automatic processing of digital photographs. The basic objective of the performed works presented in this paper was to optimize the quality of generated high-resolution orthoimages using integration of data acquired by a Z+F 5006 terrestrial laser scanner and a Canon EOS 5D Mark II digital camera. The subject was one of the walls of the "Blue Chamber" of the Museum of King Jan III's Palace at Wilanów (Warsaw, Poland). The high-resolution images resulting from integration of the point clouds acquired by the different methods were analysed in detail with respect to geometric and radiometric correctness.

  7. A Study on the Development of a Robot-Assisted Automatic Laser Hair Removal System

    PubMed Central

    Lim, Hyoung-woo; Park, Sungwoo; Noh, Seungwoo; Lee, Dong-Hun; Yoon, Chiyul; Koh, Wooseok; Kim, Youdan; Chung, Jin Ho; Kim, Hee Chan

    2014-01-01

    Abstract Background and Objective: The robot-assisted automatic laser hair removal (LHR) system is developed to automatically detect any arbitrary shape of the desired LHR treatment area and to provide uniform laser irradiation to the designated skin area. Methods: For uniform delivery of laser energy, a unit of a commercial LHR device, a laser distance sensor, and a high-resolution webcam are attached at the six axis industrial robot's end-effector, which can be easily controlled using a graphical user interface (GUI). During the treatment, the system provides real-time treatment progress as well as the total number of “pick and place” automatically. Results: During the test, it was demonstrated that the arbitrary shapes were detected, and that the laser was delivered uniformly. The localization error test and the area-per-spot test produced satisfactory outcome averages of 1.04 mm error and 38.22 mm2/spot, respectively. Conclusions: Results showed that the system successfully demonstrated accuracy and effectiveness. The proposed system is expected to become a promising device in LHR treatment. PMID:25343281

  8. A study on the development of a robot-assisted automatic laser hair removal system.

    PubMed

    Lim, Hyoung-Woo; Park, Sungwoo; Noh, Seungwoo; Lee, Dong-Hun; Yoon, Chiyul; Koh, Wooseok; Kim, Youdan; Chung, Jin Ho; Kim, Hee Chan; Kim, Sungwan

    2014-11-01

    Background and Objective: The robot-assisted automatic laser hair removal (LHR) system is developed to automatically detect any arbitrary shape of the desired LHR treatment area and to provide uniform laser irradiation to the designated skin area. For uniform delivery of laser energy, a unit of a commercial LHR device, a laser distance sensor, and a high-resolution webcam are attached at the six-axis industrial robot's end-effector, which can be easily controlled using a graphical user interface (GUI). During the treatment, the system provides real-time treatment progress as well as the total number of "pick and place" operations automatically. During the test, it was demonstrated that the arbitrary shapes were detected, and that the laser was delivered uniformly. The localization error test and the area-per-spot test produced satisfactory outcome averages of 1.04 mm error and 38.22 mm²/spot, respectively. Results showed that the system successfully demonstrated accuracy and effectiveness. The proposed system is expected to become a promising device in LHR treatment.

  9. High Resolution X-Ray Micro-CT of Ultra-Thin Wall Space Components

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rauser, R. W.; Bowman, Randy R.; Bonacuse, Peter; Martin, Richard E.; Locci, I. E.; Kelley, M.

    2012-01-01

    A high resolution micro-CT system has been assembled and is being used to provide optimal characterization for ultra-thin wall space components. The Glenn Research Center NDE Sciences Team, using this CT system, has assumed the role of inspection vendor for the Advanced Stirling Convertor (ASC) project at NASA. This article will discuss many aspects of the development of the CT scanning for this type of component, including CT system overview; inspection requirements; process development, software utilized and developed to visualize, process, and analyze results; calibration sample development; results on actual samples; correlation with optical/SEM characterization; CT modeling; and development of automatic flaw recognition software. Keywords: Nondestructive Evaluation, NDE, Computed Tomography, Imaging, X-ray, Metallic Components, Thin Wall Inspection

  10. A New Object-Based Framework to Detect Shadows in High-Resolution Satellite Imagery Over Urban Areas

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    In this paper a new object-based framework to detect shadow areas in high resolution satellite images is proposed. To produce the shadow map at the pixel level, state-of-the-art supervised machine learning algorithms are employed. Automatic ground truth generation, based on Otsu thresholding of shadow and non-shadow indices, is used to train the classifiers. This is followed by segmenting the image scene to create image objects. To detect shadow objects, a majority vote over the pixel-based shadow detection results is applied. A GeoEye-1 multi-spectral image over an urban area in Qom city, Iran, is used in the experiments. Results show the superiority of the proposed method over traditional pixel-based approaches, both visually and quantitatively.
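
    A minimal sketch of the two ingredients named above, Otsu thresholding of a shadow index and per-object majority voting, using SLIC superpixels as a stand-in for the segmentation step; the shadow index, file name and parameters are illustrative assumptions, not those of the paper.

```python
# Illustrative pixel-level shadow labelling (Otsu on a crude index) followed by
# object-level majority voting over superpixels.
import numpy as np
from skimage import io, color
from skimage.filters import threshold_otsu
from skimage.segmentation import slic

rgb = io.imread("geoeye_tile.png")[:, :, :3]           # hypothetical urban tile

# A simple (invented) shadow index: dark and relatively bluish pixels score high.
hsv = color.rgb2hsv(rgb)
shadow_index = (1.0 - hsv[:, :, 2]) * (0.5 + 0.5 * hsv[:, :, 0])
pixel_shadow = shadow_index > threshold_otsu(shadow_index)

# Object level: label a segment as shadow if most of its pixels are shadow.
segments = slic(rgb, n_segments=2000, compactness=10, start_label=0)
object_shadow = np.zeros_like(pixel_shadow)
for seg_id in np.unique(segments):
    mask = segments == seg_id
    object_shadow[mask] = pixel_shadow[mask].mean() > 0.5
```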

  11. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    PubMed

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated with complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Operational multisensor sea ice concentration algorithm utilizing Sentinel-1 and AMSR2 data

    NASA Astrophysics Data System (ADS)

    Dinessen, Frode

    2017-04-01

    The Norwegian Ice Service provides ice charts of the European part of the Arctic every weekday. The charts are produced from a manual interpretation of satellite data, in which SAR (Synthetic Aperture Radar) data play a central role because of their high spatial resolution and independence of cloud cover. A new chart is produced every weekday and the charts are distributed through the CMEMS portal. After the launch of Sentinel-1A and B, the amount of available SAR data has increased significantly, making it difficult to utilize all the data in a manual process. This, in combination with user demand for more frequent updates of the ice conditions, including during weekends, has made it important to focus development on utilizing the high resolution Sentinel-1 data in an automatic sea ice concentration analysis. The algorithm developed here is based on a multi-sensor approach using optimal interpolation to combine sea ice concentration products derived from Sentinel-1 and passive microwave data from AMSR2. The Sentinel-1 data are classified with a Bayesian SAR classification algorithm using data in extra-wide mode dual polarization (HH/HV) to separate ice and water at the full 40x40 meter spatial resolution. From the ice/water classification, the sea ice concentration is estimated by calculating the amount of ice within an area of 1x1 km. The AMSR2 sea ice concentrations are produced as part of the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF) project and utilize the 89 GHz channel to produce a concentration product with a 3 km spatial resolution. Results from the automatic classification will be presented.
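
    A minimal sketch of the combination step, assuming both concentration fields have already been resampled to a common grid: a per-cell inverse-error-variance blend (optimal interpolation in its simplest scalar form) of the SAR-derived and AMSR2-derived concentrations, with illustrative error variances rather than the operational ones.

```python
# Illustrative variance-weighted blending of two sea ice concentration fields.
import numpy as np

def oi_blend(sic_sar, var_sar, sic_amsr2, var_amsr2):
    """Inverse-variance blend; where SAR is missing (NaN) the AMSR2 value is kept."""
    w_sar, w_amsr2 = 1.0 / var_sar, 1.0 / var_amsr2
    blended = (w_sar * sic_sar + w_amsr2 * sic_amsr2) / (w_sar + w_amsr2)
    return np.where(np.isnan(sic_sar), sic_amsr2, blended)

# Toy usage on a 1 km grid: SAR concentration (0-100 %) with swath gaps, AMSR2 everywhere.
rng = np.random.default_rng(0)
sic_sar = rng.uniform(0, 100, (500, 500))
sic_sar[rng.random((500, 500)) < 0.3] = np.nan          # simulated swath gaps
sic_amsr2 = rng.uniform(0, 100, (500, 500))
analysis = oi_blend(sic_sar, var_sar=25.0, sic_amsr2=sic_amsr2, var_amsr2=100.0)
```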

  13. Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Sheng, Yongwei

    2000-12-01

    Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a way to reconstruct crown surfaces. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in a broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Based on the fact that most conifer crowns have a solid geometric form, conifer crowns are modeled as a generalized hemi-ellipsoid. Both automatic and semi-automatic approaches are investigated for optimal tree model development from multi-ocular images. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. This thesis starts with a sophisticated stereo matching algorithm and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) the problem of surface model composition from tree models, (2) the occlusion problem in disparity prediction from tree models, (3) the problem of integrating the predicted disparities into image matching, (4) the reduction of tree model edge effects on the disparity map, (5) the occlusion problem in orthophoto production, and (6) the foreshortening problem in image matching, which is very serious for conifer crown surfaces. Solutions to the above problems are necessary for successful crown surface reconstruction. The model-based approach was applied to recover the canopy surface of a dense redwood stand using tri-ocular high-resolution images scanned from 1:2,400 aerial photographs. The results demonstrate the approach's ability to reconstruct complicated stands. The model-based approach proposed in this thesis is potentially applicable to other surface reconstruction problems with a priori knowledge about the objects.

  14. DR HAGIS-a fundus image database for the automatic extraction of retinal surface vessels from diabetic patients.

    PubMed

    Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall

    2017-01-01

    A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras. This results in a range of different image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying sizes and resolutions and four comorbidity subgroups: collectively defined as the diabetic retinopathy, hypertension, age-related macular degeneration, and glaucoma image set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide some baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
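
    For readers who want a concrete baseline along the lines of the Gabor-based algorithm mentioned above, the following sketch enhances vessel-like structures with the maximum Gabor response over several orientations and thresholds the result; it is not the published algorithm, and the filter frequency and file names are assumptions.

```python
# Illustrative Gabor-based vessel enhancement and thresholding of a fundus image.
import numpy as np
from skimage import io
from skimage.filters import gabor, threshold_otsu

fundus = io.imread("drhagis_image.png", as_gray=True)    # hypothetical fundus image

responses = []
for theta in np.linspace(0, np.pi, 8, endpoint=False):
    real, imag = gabor(fundus, frequency=0.15, theta=theta)
    responses.append(np.sqrt(real**2 + imag**2))          # orientation-specific energy

vesselness = np.max(responses, axis=0)                    # best response over orientations
vessel_mask = vesselness > threshold_otsu(vesselness)

# Pixel-wise accuracy against a manual segmentation, if one is available:
# manual = io.imread("drhagis_manual.png", as_gray=True) > 0
# accuracy = (vessel_mask == manual).mean()
```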

  15. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated and extracted for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIB-SVM support vector machine library. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, about 2% higher than the EAPs with principal component analysis (PCA) method, and about 6% higher than APs applied to the original high-resolution multispectral data. Moreover, the results also suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study should be helpful for classification applications of high-resolution multispectral satellite remote sensing images.
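
    A minimal sketch of the RLS classification stage with an RBF kernel, implemented here as kernel ridge regression on one-hot labels (a standard regularized-least-squares classifier formulation) after an ICA reduction; the synthetic pixel features stand in for the EAP features of the paper, and the hyperparameters are illustrative.

```python
# Illustrative RBF-kernel regularized least squares classification after ICA.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)     # toy multispectral pixels
X = FastICA(n_components=2, random_state=0).fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
Y_tr = np.eye(4)[y_tr]                                       # one-hot targets

rls = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0).fit(X_tr, Y_tr)
pred = rls.predict(X_te).argmax(axis=1)                      # class = largest output
print("overall accuracy:", (pred == y_te).mean())
```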

  16. iMSRC: converting a standard automated microscope into an intelligent screening platform

    PubMed Central

    Carro, Angel; Perez-Martinez, Manuel; Soriano, Joaquim; Pisano, David G.; Megias, Diego

    2015-01-01

    Microscopy in the context of biomedical research is demanding new tools to automatically detect and capture objects of interest. The few extant packages addressing this need, however, have enjoyed limited uptake due to complexity of use and installation. To overcome these drawbacks, we developed iMSRC, which combines ease of use and installation with high flexibility and enables applications such as rare event detection and high-resolution tissue sample screening, saving time and resources. PMID:26015081

  17. High-speed event detector for embedded nanopore bio-systems.

    PubMed

    Huang, Yiyun; Magierowski, Sebastian; Ghafar-Zadeh, Ebrahim; Wang, Chengjie

    2015-08-01

    Biological measurements of microscopic phenomena often deal with discrete-event signals. The ability to automatically carry out such measurements at high-speed in a miniature embedded system is desirable but compromised by high-frequency noise along with practical constraints on filter quality and sampler resolution. This paper presents a real-time event-detection method in the context of nanopore sensing that helps to mitigate these drawbacks and allows accurate signal processing in an embedded system. Simulations show at least a 10× improvement over existing on-line detection methods.
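
    A minimal sketch of the general idea, assuming a synthetic current trace: low-pass filter the signal, estimate the open-pore baseline robustly, and flag contiguous samples that dip below the baseline by more than k robust standard deviations; the parameters and trace are illustrative assumptions, not the paper's embedded algorithm.

```python
# Illustrative threshold-based blockade-event detection on a nanopore-like trace.
import numpy as np

def detect_events(current, win=50, k=5.0):
    """Return (start, end) sample indices of candidate blockade events."""
    kernel = np.ones(win) / win
    smooth = np.convolve(current, kernel, mode="same")        # simple moving average
    baseline = np.median(smooth)
    sigma = 1.4826 * np.median(np.abs(smooth - baseline))     # robust noise estimate
    below = smooth < baseline - k * sigma
    edges = np.diff(below.astype(int))
    starts, ends = np.flatnonzero(edges == 1), np.flatnonzero(edges == -1)
    return list(zip(starts, ends))

# Toy trace: 100 pA open-pore current with two 30 pA blockades and Gaussian noise.
rng = np.random.default_rng(0)
trace = 100.0 + rng.normal(0, 2.0, 20000)
trace[5000:5400] -= 30.0
trace[12000:12100] -= 30.0
print(detect_events(trace))
```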

  18. Flexible Energy Scheduling Tool for Integrating Variable Generation | Grid

    Science.gov Websites

    FESTIV integrates unit commitment, security-constrained economic dispatch, and automatic generation control sub-models, allowing different resolutions and operating strategies to be explored; it produces not only economic metrics but also further operational metrics.

  19. Rise to SUMMIT: the Sydney University Multiple-Mirror Telescope

    NASA Astrophysics Data System (ADS)

    Moore, Anna M.; Davis, John

    2000-07-01

    The Sydney University Multiple Mirror Telescope (SUMMIT) is a medium-sized telescope designed specifically for high resolution stellar spectroscopy. Throughout the design emphasis has been placed on high efficiency at low cost. The telescope consists of four 0.46 m diameter mirrors mounted on a single welded steel frame. Specially designed mirror cells support and point each mirror, allowing accurate positioning of the images on optical fibers located at the foci of the mirrors. Four fibers convey the light to the future location of a high resolution spectrograph away from the telescope in a stable environment. An overview of the commissioning of the telescope is presented, including the guidance and automatic mirror alignment and focussing systems. SUMMIT is located alongside the Sydney University Stellar Interferometer at the Paul Wild Observatory, near Narrabri, Northern New South Wales.

  20. Object oriented classification of high resolution data for inventory of horticultural crops

    NASA Astrophysics Data System (ADS)

    Hebbar, R.; Ravishankar, H. M.; Trivedi, S.; Subramoniam, S. R.; Uday, R.; Dadhwal, V. K.

    2014-11-01

    High resolution satellite images are associated with large variance and thus per-pixel classifiers often result in poor accuracy, especially in the delineation of horticultural crops. In this context, object oriented techniques are powerful and promising methods for classification. In the present study, a semi-automatic object oriented feature extraction model has been used for delineation of horticultural fruit and plantation crops using Erdas Objective Imagine. Multi-resolution data from Resourcesat LISS-IV and Cartosat-1 have been used as source data in the feature extraction model. Spectral and textural information along with NDVI were used as inputs for generation of Spectral Feature Probability (SFP) layers using sample training pixels. The SFP layers were then converted into raster objects using threshold and clump functions, resulting in a pixel probability layer. A set of raster and vector operators was employed in the subsequent steps for generating a thematic layer in vector format. This semi-automatic feature extraction model was employed for classification of major fruit and plantation crops, viz. mango, banana, citrus, coffee and coconut, grown under different agro-climatic conditions. In general, a classification accuracy of about 75-80 per cent was achieved for these crops using object based classification alone, and this was further improved using minimal visual editing of misclassified areas. A comparison of on-screen visual interpretation with the object oriented approach showed good agreement. It was observed that old and mature plantations were classified more accurately, while young and recently planted ones (3 years or less) showed poor classification accuracy due to mixed spectral signatures, wider spacing and poor stands of plantations. The results indicated the potential use of the object oriented approach for classification of high resolution data for delineation of horticultural fruit and plantation crops. The present methodology is applicable at local levels, and future development is focused on up-scaling the methodology for generation of fruit and plantation crop maps at regional and national levels, which is important for creation of a database for overall horticultural crop development.

  1. Using novel acoustic and visual mapping tools to predict the small-scale spatial distribution of live biogenic reef framework in cold-water coral habitats

    NASA Astrophysics Data System (ADS)

    De Clippele, L. H.; Gafeira, J.; Robert, K.; Hennige, S.; Lavaleye, M. S.; Duineveld, G. C. A.; Huvenne, V. A. I.; Roberts, J. M.

    2017-03-01

    Cold-water corals form substantial biogenic habitats on continental shelves and in deep-sea areas with topographic highs, such as banks and seamounts. In the Atlantic, many reef and mound complexes are engineered by Lophelia pertusa, the dominant framework-forming coral. In this study, a variety of mapping approaches were used at a range of scales to map the distribution of both cold-water coral habitats and individual coral colonies at the Mingulay Reef Complex (west Scotland). The new ArcGIS-based British Geological Survey (BGS) seabed mapping toolbox semi-automatically delineated over 500 Lophelia reef `mini-mounds' from bathymetry data with 2-m resolution. The morphometric and acoustic characteristics of the mini-mounds were also automatically quantified and captured using this toolbox. Coral presence data were derived from high-definition remotely operated vehicle (ROV) records and high-resolution microbathymetry collected by a ROV-mounted multibeam echosounder. With a resolution of 0.35 × 0.35 m, the microbathymetry covers 0.6 km2 in the centre of the study area and allowed identification of individual live coral colonies in acoustic data for the first time. Maximum water depth, maximum rugosity, mean rugosity, bathymetric positioning index and maximum current speed were identified as the environmental variables that contributed most to the prediction of live coral presence. These variables were used to create a predictive map of the likelihood of presence of live cold-water coral colonies in the area of the Mingulay Reef Complex covered by the 2-m resolution data set. Predictive maps of live corals across the reef will be especially valuable for future long-term monitoring surveys, including those needed to understand the impacts of global climate change. This is the first study using the newly developed BGS seabed mapping toolbox and an ROV-based microbathymetric grid to explore the environmental variables that control coral growth on cold-water coral reefs.

  2. A high-resolution optical imaging system for obtaining the serial transverse section images of biologic tissue

    NASA Astrophysics Data System (ADS)

    Wu, Li; Zhang, Bin; Wu, Ping; Liu, Qian; Gong, Hui

    2007-05-01

    A high-resolution optical imaging system was designed and developed to obtain serial transverse section images of biologic tissue, such as the mouse brain. The system adopts a new knife-edge imaging technique, a high-speed, high-sensitivity line-scan CCD and linear air-bearing stages, incorporated with an OLYMPUS microscope. The section images at the tip of the knife edge are captured synchronously by reflection imaging in the microscope while the tissue is being cut. The tissue can be sectioned at intervals of 250 nm, with the same resolution for the transverse section images obtained in the x-y plane. The cutting is finished automatically under a control program written in advance, which saves the considerable labour of registering the vast image data. In addition, the system can cut a larger sample than a conventional ultramicrotome, avoiding the loss of tissue structure information caused by splitting the sample to meet the size limit of the ultramicrotome.

  3. Using a high spatial resolution tactile sensor for intention detection.

    PubMed

    Castellini, Claudio; Koiva, Risto

    2013-06-01

    Intention detection is the interpretation of biological signals with the aim of automatically, reliably and naturally understanding what a human subject desires to do. Although intention detection is not restricted to disabled people, such methods can be crucial in improving a patient's life, e.g., aiding control of a robotic wheelchair or of a self-powered prosthesis. Traditionally, intention detection is done using, e.g., gaze tracking, surface electromyography and electroencephalography. In this paper we present exciting initial results of an experiment aimed at intention detection using a high-spatial-resolution, high-dynamic-range tactile sensor. The tactile image of the ventral side of the forearm of 9 able-bodied participants was recorded during a variable-force task stimulated at the fingertip. Both the forces at the fingertip and at the forearm were synchronously recorded. We show that a standard dimensionality reduction technique (Principal Component Analysis) plus a Support Vector Machine attain almost perfect detection accuracy of the direction and the intensity of the intended force. This paves the way for high spatial resolution tactile sensors to be used as a means for intention detection.
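
    A minimal sketch of the dimensionality-reduction-plus-classifier pipeline the abstract describes, assuming scikit-learn and purely synthetic stand-in data; the real tactile frames, labels and tuning of the study are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for tactile frames: 100 samples of a 16x16
# pressure image, flattened, with labels for an intended force class.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 16 * 16))
y = rng.integers(0, 3, size=100)   # e.g. three force directions

# Dimensionality reduction (PCA) followed by a Support Vector Machine,
# mirroring the pipeline mentioned in the abstract.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f" % scores.mean())
```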

  4. Quantitative micro-CT based coronary artery profiling using interactive local thresholding and cylindrical coordinates.

    PubMed

    Panetta, Daniele; Pelosi, Gualtiero; Viglione, Federica; Kusmic, Claudia; Terreni, Marianna; Belcari, Nicola; Guerra, Alberto Del; Athanasiou, Lambros; Exarchos, Themistoklis; Fotiadis, Dimitrios I; Filipovic, Nenad; Trivella, Maria Giovanna; Salvadori, Piero A; Parodi, Oberdan

    2015-01-01

    Micro-CT is an established imaging technique for high-resolution non-destructive assessment of vascular samples, which is gaining growing interest for investigations of atherosclerotic arteries both in humans and in animal models. However, there is still a lack in the definition of micro-CT image metrics suitable for comprehensive evaluation and quantification of features of interest in the field of experimental atherosclerosis (ATS). A novel approach to micro-CT image processing for profiling of coronary ATS is described, providing comprehensive visualization and quantification from contrast-agent-free, high-resolution 3D reconstructions of full-length artery walls. Accelerated coronary ATS was induced by a high-fat, cholesterol-enriched diet in swine, and the left coronary artery (LCA) was harvested en bloc for micro-CT scanning and histologic processing. A cylindrical coordinate system was defined on the image space after curved multiplanar reformation of the coronary vessel for comprehensive visualization of the main vessel features, such as wall thickening and calcium content. A novel semi-automatic segmentation procedure based on 2D histograms was implemented and the quantitative results validated by histology. The potential of attenuation-based micro-CT at low kV to reliably separate arterial wall layers from adjacent tissue, as well as to identify wall and plaque contours and major tissue components, was validated by histology. Morphometric indexes from histological data corresponding to several micro-CT slices were derived (double-observer evaluation at different coronary ATS stages) and highly significant correlations (R2 > 0.90) were found. Semi-automatic morphometry was validated by double-observer manual morphometry of micro-CT slices, and highly significant correlations were found (R2 > 0.92). The micro-CT methodology described represents a handy and reliable tool for quantitative, high-resolution, contrast-agent-free, full-length coronary wall profiling, able to assist morphometry of atherosclerotic vessels in a preclinical experimental model of coronary ATS and providing a link between in vivo imaging and histology.

  5. Enhanced FIB-SEM systems for large-volume 3D imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.

  6. Enhanced FIB-SEM systems for large-volume 3D imaging.

    PubMed

    Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F

    2017-05-13

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.

  7. Earth observation data based rapid flood-extent modelling for tsunami-devastated coastal areas

    NASA Astrophysics Data System (ADS)

    Hese, Sören; Heyer, Thomas

    2016-04-01

    Earth observation (EO)-based mapping and analysis of natural hazards plays a critical role in various aspects of post-disaster aid management. Very high spatial resolution Earth observation data provide important information for managing post-tsunami activities on devastated land and monitoring re-cultivation and reconstruction. The automatic and fast use of high-resolution EO data for rapid mapping is, however, complicated by high spectral variability in densely populated urban areas and unpredictable textural and spectral land-surface changes. The present paper presents the results of the SENDAI project, which developed an automatic post-tsunami flood-extent modelling concept using RapidEye multispectral satellite data and ASTER Global Digital Elevation Model Version 2 (GDEM V2) data of the eastern coast of Japan (captured after the Tohoku earthquake). In this paper, the authors developed both a bathtub-modelling approach and a cost-distance approach, and integrated the roughness parameters of different land-use types to increase the accuracy of flood-extent modelling. Overall, the accuracy of the developed models reached 87-92%, depending on the analysed test site. The flood-modelling approach was explained and results were compared with published approaches. We came to the conclusion that the cost-factor-based approach reaches accuracy comparable to published results from hydrological modelling. However, the proposed cost-factor approach is based on a much simpler dataset, which is available globally.
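
    For illustration, a minimal bathtub-style inundation sketch in Python/NumPy follows; it is not the SENDAI implementation, it omits the cost-distance and roughness components, and it uses an invented toy elevation ramp in place of the GDEM data.

```python
import numpy as np
from scipy import ndimage

def bathtub_flood(dem, water_level, seed):
    """Minimal bathtub model: cells below the water level are flooded
    only if they are hydraulically connected to a seed cell (e.g. on
    the coastline), which avoids flooding isolated inland depressions."""
    below = dem <= water_level
    labels, _ = ndimage.label(below)
    return below & (labels == labels[seed])

# Toy DEM (metres): elevation rises linearly from the coast (left edge,
# 0 m) to 20 m inland; assumed run-up water level of 5 m.
x = np.linspace(0.0, 20.0, 100)
dem = np.tile(x, (100, 1))
flooded = bathtub_flood(dem, water_level=5.0, seed=(50, 0))
print("flooded fraction:", flooded.mean())
```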

  8. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks.

    PubMed

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl; Mathiassen, Solvejg Kopp; Somerville, Gayle J; Jørgensen, Rasmus Nyholm

    2018-05-16

    This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) from in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil type, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.
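
    The sketch below shows the general shape of a small convolutional network with separate species and growth-stage outputs, assuming PyTorch; the actual architecture, training data and hyperparameters of the study are not reproduced, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class GrowthStageNet(nn.Module):
    """Tiny CNN sketch for joint weed-species / growth-stage prediction;
    far smaller than the network used in the study, for illustration only."""
    def __init__(self, n_species=18, n_stages=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.species_head = nn.Linear(32, n_species)
        self.stage_head = nn.Linear(32, n_stages)

    def forward(self, x):
        z = self.features(x)
        return self.species_head(z), self.stage_head(z)

# One random 128x128 RGB crop standing in for a field image.
model = GrowthStageNet()
species_logits, stage_logits = model(torch.randn(1, 3, 128, 128))
print(species_logits.shape, stage_logits.shape)
```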

  9. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and in a fully coupled model system.
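
    A very reduced sketch of steps (1) and (3) of such a disaggregation, assuming SciPy/NumPy and toy values; the `deterministic' step (2), which needs the high-resolution surface fields and the derived statistical rules, is deliberately omitted, and the AR(1) noise is only a one-dimensional stand-in for the full noise generator.

```python
import numpy as np
from scipy import ndimage

def disaggregate(coarse, factor, noise_std=0.1, ar_coeff=0.8, rng=None):
    """Sketch: (1) spline interpolation of the coarse field, then
    (3) additive autoregressive noise to restore subgrid variability."""
    if rng is None:
        rng = np.random.default_rng()
    # Step 1: bi-quadratic spline interpolation (order=2).
    fine = ndimage.zoom(coarse, factor, order=2)
    # Step 3: AR(1) noise along the first axis as a simple stand-in
    # for the autocorrelated noise generator described above.
    eps = rng.normal(scale=noise_std, size=fine.shape)
    noise = np.empty_like(fine)
    noise[0] = eps[0]
    for i in range(1, fine.shape[0]):
        noise[i] = ar_coeff * noise[i - 1] + np.sqrt(1 - ar_coeff**2) * eps[i]
    return fine + noise

coarse_temp = 280 + np.random.default_rng(3).random((10, 10))  # toy coarse field (K)
fine_temp = disaggregate(coarse_temp, factor=7)                # 7x finer grid
print(fine_temp.shape)
```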

  10. Do Judgments of Learning Predict Automatic Influences of Memory?

    ERIC Educational Resources Information Center

    Undorf, Monika; Böhm, Simon; Cüpper, Lutz

    2016-01-01

    Current memory theories generally assume that memory performance reflects both recollection and automatic influences of memory. Research on people's predictions about the likelihood of remembering recently studied information on a memory test, that is, on judgments of learning (JOLs), suggests that both magnitude and resolution of JOLs are linked…

  11. Liquid chromatography with high resolution mass spectrometry for identification of organic contaminants in fish fillet: screening and quantification assessment using two scan modes for data acquisition.

    PubMed

    Munaretto, Juliana S; May, Marília M; Saibt, Nathália; Zanella, Renato

    2016-07-22

    This study proposed a strategy to identify and quantify 182 organic contaminants from different chemical classes, for instance pesticides, veterinary drugs and personal care products, in fish fillet using liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (LC-QToF/MS). For this purpose, two different scan methods (full scan and all ions MS/MS) were evaluated to assess the best option for screening analysis in spiked fish fillet samples. In general, full scan acquisition was found to be more reliable (84%) in the automatic identification and quantification when compared to all ions MS/MS, with 72% of the compounds detected. Additionally, a qualitative automatic search showed a mass accuracy error below 5 ppm for 77% of the compounds in full scan mode compared to only 52% in all ions MS/MS scan. However, all ions MS/MS provides fragmentation information on the target compounds. Undoubtedly, structural information on a wide number of compounds can be obtained using high resolution mass spectrometry (HRMS), but it is necessary to assess it thoroughly in order to choose the best scan mode. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Development of an automatic test equipment for nano gauging displacement transducers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Chen; Jywe, Wen-Yuh; Liu, Chien-Hung

    2005-01-01

    In order to satisfy the increasing demands on precision in manufacturing technology, nanometrology is gradually becoming more important in the manufacturing process. To ensure the precision of manufacture, precise measuring instruments and sensors play a decisive role in the accurate characterization and inspection of products. For linear length inspection, high precision gauging displacement transducers, i.e. nano gauging displacement transducers (NGDT), are often utilized; they have a resolution in the nanometer range and can achieve an accuracy of better than 100 nm. Such measurement instruments include transducers based on electronic as well as optical measurement principles, e.g. inductive, incremental-optical or interference-optical. To guarantee accuracy and traceability to the definition of the meter, calibration and testing of NGDT are essential. Currently, there are some methods and machines for testing NGDT, but they suffer from various disadvantages. Some of them permit only manual test procedures, which are time-consuming, e.g. using highly accurate gauge blocks as material measures. Other tests reach higher accuracy only in the micrometer range or result in uncertainties of more than 100 nm over large measuring ranges. To realize the testing of NGDT with high resolution as well as a large measuring range, an automatic test equipment was constructed that has a resolution of 1.24 nm, a measuring range of up to 20 mm (60 mm) and a measuring uncertainty of approximately ±10 nm; it thus fulfils the requirements of high resolution within the nanometer range while simultaneously covering a large measuring range in the order of millimeters. The test system includes a stable frame, a polarization interferometer, an angle sensor, an angular control, a drive system and piezo translators. During the test procedure, the angular control and piezo translators minimize the Abbe error. For the automation of the test procedure a measuring program adhering to the measurement principle outlined in the VDI/VDE 2617 guidelines was designed. With this program an NGDT can be tested in less than thirty minutes with eleven measuring points and five repetitions. By means of theoretical and experimental investigations it was proved that the automatic test system achieves a test uncertainty of approximately ±10 nm over a measuring range of 18 mm, which corresponds to a relative uncertainty of approximately ±5 × 10-7. With its small uncertainty, minimization of the Abbe error and short test time, this system can be regarded as a universal and efficient precision test equipment, available for the accurate testing of arbitrary high precision gauging displacement transducers.

  13. An ROI multi-resolution compression method for 3D-HEVC

    NASA Astrophysics Data System (ADS)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of video resolution, which brings challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and by compressing multi-resolution preprocessed video as alternative data according to the network conditions. At first, the semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined using the contour neighborhood along with the face region and foreground area of the scene. Secondly, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by the audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.
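
    A highly simplified sketch of the non-ROI replacement idea follows, assuming NumPy/SciPy, hypothetical frame sizes and a rectangular ROI mask; it uses plain bilinear up-sampling as a stand-in for the Laplace-based up-sampling mentioned above and does not perform any 3D-HEVC encoding.

```python
import numpy as np
from scipy import ndimage

def roi_composite(high_res, low_res, roi_mask):
    """Replace non-ROI pixels of a high-resolution frame with an
    up-sampled version of the low-resolution frame, so that a
    subsequent encoder spends most of its bit budget on the ROI."""
    zoom = [h / l for h, l in zip(high_res.shape, low_res.shape)]
    upsampled = ndimage.zoom(low_res, zoom, order=1)   # bilinear up-sampling
    upsampled = upsampled[: high_res.shape[0], : high_res.shape[1]]
    return np.where(roi_mask, high_res, upsampled)

rng = np.random.default_rng(4)
frame = rng.integers(0, 255, size=(480, 640)).astype(np.uint8)  # luma channel
small = frame[::4, ::4]                                         # 1/4-resolution copy
mask = np.zeros_like(frame, dtype=bool)
mask[100:300, 200:500] = True                                   # hypothetical face/foreground ROI
print(roi_composite(frame, small, mask).shape)
```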

  14. Towards the Optimal Pixel Size of DEM for Automatic Mapping of Landslide Areas

    NASA Astrophysics Data System (ADS)

    Pawłuszek, K.; Borkowski, A.; Tarolli, P.

    2017-05-01

    Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution, and various DEM resolutions can be applicable for diverse landslide applications. Thus, this study aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely a feed-forward neural network (FFNN) and maximum likelihood (ML) classification, was applied. This also allowed the impact of the classification method on the selection of DEM resolution to be determined. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing a confusion matrix. The results of this study suggest that the finest DEM resolution is not always the best fit; however, working at 1 m DEM resolution, at the micro-topography scale, can show different results. The best performance was found using the 5 m DEM for FFNN and the 1 m DEM for ML classification.
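
    A small example of the evaluation step, assuming scikit-learn and randomly generated stand-in maps; it only illustrates how a confusion matrix and summary accuracies can be derived from a landslide inventory map, not the study's actual data or classifiers.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical per-pixel results: 1 = landslide, 0 = stable terrain.
rng = np.random.default_rng(8)
inventory = rng.integers(0, 2, size=10000)                  # reference inventory map
predicted = np.where(rng.random(10000) < 0.85, inventory,   # 85% simulated agreement
                     1 - inventory)

cm = confusion_matrix(inventory, predicted)
overall_accuracy = np.trace(cm) / cm.sum()
print(cm)
print("OA = %.3f, kappa = %.3f"
      % (overall_accuracy, cohen_kappa_score(inventory, predicted)))
```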

  15. SEM-microphotogrammetry, a new take on an old method for generating high-resolution 3D models from SEM images.

    PubMed

    Ball, A D; Job, P A; Walker, A E L

    2017-08-01

    The method we present here uses a scanning electron microscope programmed via macros to automatically capture dozens of images at suitable angles to generate accurate, detailed three-dimensional (3D) surface models with micron-scale resolution. We demonstrate that it is possible to use these Scanning Electron Microscope (SEM) images in conjunction with commercially available software originally developed for photogrammetry reconstructions from Digital Single Lens Reflex (DSLR) cameras and to reconstruct 3D models of the specimen. These 3D models can then be exported as polygon meshes and eventually 3D printed. This technique offers the potential to obtain data suitable to reconstruct very tiny features (e.g. diatoms, butterfly scales and mineral fabrics) at nanometre resolution. Ultimately, we foresee this as being a useful tool for better understanding spatial relationships at very high resolution. However, our motivation is also to use it to produce 3D models to be used in public outreach events and exhibitions, especially for the blind or partially sighted. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  16. A full range detector for the HIRRBS high resolution RBS magnetic spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skala, Wayne G.; Haberl, Arthur W.; Bakhru, Hassaram

    2013-04-19

    The UAlbany HIRRBS (High Resolution RBS) system has been updated for better use in rapid analysis. The focal plane detector now covers the full range from U down to O using a linear stepper motor to translate the 1-cm detector across the 30-cm range. Input is implemented with zero-back-angle operation in all cases. The chamber has been modified to allow for quick swapping of sample holders, including a channeling goniometer. A fixed standard surface-barrier detector allows for normal RBS simultaneously with use of the magnetic spectrometer. The user can select a region on the standard spectrum or can select an element edge or an energy point for collection of the expanded spectrum portion. The best resolution currently obtained is about 2-to-3 keV, probably representing the energy width of the incoming beam. Calibration is maintained automatically for any spectrum portion and any beam energy from 1.0 to 3.5 MeV. Element resolving power, sensitivity and depth resolution are shown using several examples. Examples also show the value of simultaneous conventional RBS.

  17. Aberration control in 4Pi nanoscopy: definitions, properties, and applications (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hao, Xiang; Allgeyer, Edward S.; Velasco, Mary Grace M.; Booth, Martin J.; Bewersdorf, Joerg

    2016-03-01

    The development of fluorescence microscopy, which allows live-cell imaging with high labeling specificity, has made the visualization of cellular architecture routine. However, for centuries the spatial resolution of optical microscopy was fundamentally limited by diffraction. The past two decades have seen a revolution in far-field optical nanoscopy (or "super-resolution" microscopy). The best 3D resolution is achieved by optical nanoscopes like the isoSTED or the iPALM/4Pi-SMS, which utilize two opposing objective lenses in a coherent manner. These systems are, however, also more complex, and the required interference conditions demand precise aberration control. Our research involves developing novel adaptive optics techniques that enable imaging with high spatial and temporal resolution for biological applications. In this talk, we will discuss how adaptive optics can enhance dual-objective-lens nanoscopes. We will demonstrate how adaptive optics devices provide unprecedented freedom to manipulate the light field in isoSTED nanoscopy, enable automatic beam alignment, suppress the inherent side-lobes of the point-spread function, and dynamically compensate for sample-induced aberrations. We will present both the theoretical groundwork and the experimental confirmations.

  18. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow, without initial approximations, user interaction or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  19. Collaborative Study of Analysis of High Resolution Infrared Atmospheric Spectra Between NASA Langley Research Center and the University of Denver

    NASA Technical Reports Server (NTRS)

    Goldman, Aaron

    1999-01-01

    The Langley-D.U. collaboration on the analysis of high resolution infrared atmospheric spectra covered a number of important studies of trace gas identification and quantification from field spectra, and of spectral line parameter analysis. The collaborative work included: quantification and monitoring of trace gases from ground-based spectra available from various locations and seasons and from balloon flights; studies toward the identification and quantification of isotopic species, mostly oxygen and sulfur isotopes; searches for new species in the available spectra; updates of spectroscopic line parameters by combining laboratory and atmospheric spectra with theoretical spectroscopy methods; study of trends of atmospheric trace constituents; and algorithm development, retrieval intercomparisons and automation of the analysis of NDSC spectra, for both column amounts and vertical profiles.

  20. Enhancing Spatial Resolution of Remotely Sensed Imagery Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Beck, J. M.; Bridges, S.; Collins, C.; Rushing, J.; Graves, S. J.

    2017-12-01

    Researchers at the Information Technology and Systems Center at the University of Alabama in Huntsville are using Deep Learning with Convolutional Neural Networks (CNNs) to develop a method for enhancing the spatial resolutions of moderate resolution (10-60m) multispectral satellite imagery. This enhancement will effectively match the resolutions of imagery from multiple sensors to provide increased global temporal-spatial coverage for a variety of Earth science products. Our research is centered on using Deep Learning for automatically generating transformations for increasing the spatial resolution of remotely sensed images with different spatial, spectral, and temporal resolutions. One of the most important steps in using images from multiple sensors is to transform the different image layers into the same spatial resolution, preferably the highest spatial resolution, without compromising the spectral information. Recent advances in Deep Learning have shown that CNNs can be used to effectively and efficiently upscale or enhance the spatial resolution of multispectral images with the use of an auxiliary data source such as a high spatial resolution panchromatic image. In contrast, we are using both the spatial and spectral details inherent in low spatial resolution multispectral images for image enhancement without the use of a panchromatic image. This presentation will discuss how this technology will benefit many Earth Science applications that use remotely sensed images with moderate spatial resolutions.

  1. Mapping Land Cover and Land Use Changes in the Congo Basin Forests with Optical Satellite Remote Sensing: a Pilot Project Exploring Methodologies that Improve Spatial Resolution and Map Accuracy

    NASA Astrophysics Data System (ADS)

    Molinario, G.; Baraldi, A.; Altstatt, A. L.; Nackoney, J.

    2011-12-01

    The University of Maryland has been a USAID Central Africa Regional Program for the Environment (CARPE) cross-cutting partner for many years, providing remote sensing derived information on forest cover and forest cover changes in support of CARPE's objectives of diminishing forest degradation, forest loss and biodiversity loss resulting from poor or nonexistent land use planning strategies. Together with South Dakota State University, Congo Basin-wide maps have been provided that map forest cover loss at a maximum of 60 m resolution, using Landsat imagery and higher resolution imagery for algorithm training and validation. However, to better meet the needs within the CARPE Landscapes, which call for higher resolution, more accurate land cover change maps, UMD has been exploring the use of the SIAM automatic spectral-rule classifier together with pan-sharpened Landsat data (15 m resolution) and Very High Resolution imagery from various sources. The pilot project is being developed in collaboration with the African Wildlife Foundation in the Maringa Lopori Wamba CARPE Landscape. If successful, this methodology will make the creation of high resolution change maps faster and easier, making it accessible to other entities in the Congo Basin that need accurate land cover and land use change maps in order, for example, to create sustainable land use plans, conserve biodiversity and resources, and prepare Reducing Emissions from forest Degradation and Deforestation (REDD) Measurement, Reporting and Verification (MRV) projects. The paper describes the need for higher resolution land cover change maps that focus on forest change dynamics, such as the cycling between primary forest, secondary forest, agriculture and other expanding and intensifying land uses, in the Maringa Lopori Wamba CARPE Landscape in the Equateur Province of the Democratic Republic of Congo. The methodology uses the SIAM automatic spectral-rule classifier for remote sensing imagery, together with pan-sharpened Landsat imagery with 15 m resolution and Very High Resolution imagery from different sensors, obtained from the Department of Defense database that was recently opened to NASA and its Earth Observation partners. Particular emphasis is placed on the detection of agricultural fields and their expansion into primary forests or intensification in secondary forests and fallow fields, as this is the primary driver of deforestation in this area. Fields in this area are also very small and irregularly shaped, often partly obscured by the neighboring forest canopy, hence the technical challenge of correctly detecting them and tracking them through time. Finally, the potential for use of this methodology in other regions, where information on land cover changes is needed for land use sustainability planning, is also addressed.

  2. Vital Recorder-a free research tool for automatic recording of high-resolution time-synchronised physiological data from multiple anaesthesia devices.

    PubMed

    Lee, Hyung-Chul; Jung, Chul-Woo

    2018-01-24

    The current anaesthesia information management system (AIMS) has limited capability for the acquisition of high-quality vital signs data. We have developed a Vital Recorder program to overcome the disadvantages of AIMS and to support research. Physiological data of surgical patients were collected from 10 operating rooms using the Vital Recorder. The basic equipment comprised a patient monitor, an anaesthesia machine and a bispectral index (BIS) monitor. Infusion pumps, cardiac output monitors, a regional oximeter and a rapid infusion device were added as required. The automatic recording option was used exclusively and the status of recording was frequently checked through web monitoring. Automatic recording was successful in 98.5% (4,272/4,335) of cases during eight months of operation. The total recorded time was 13,489 h (3.2 ± 1.9 h/case). The Vital Recorder's automatic recording and remote monitoring capabilities enabled us to record physiological big data with minimal effort. The Vital Recorder also provided time-synchronised data captured from a variety of devices to facilitate an integrated analysis of vital signs data. The free distribution of the Vital Recorder is expected to improve data access for researchers attempting physiological data studies and to eliminate inequalities in research opportunities due to differences in data collection capabilities.

  3. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    NASA Astrophysics Data System (ADS)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    The plane is an important target category among remote sensing targets, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, and more detailed information is available for detecting remote sensing targets automatically. Deep learning is the most advanced technology in image target detection and recognition, and has provided great performance improvements for target detection and recognition in everyday scenes. We combined this technology with remote sensing target detection and propose an end-to-end deep network algorithm, which can learn from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of plane targets and performs better in target detection than older methods.

  4. A new approach for automatic matching of ground control points in urban areas from heterogeneous images

    NASA Astrophysics Data System (ADS)

    Cong, Chao; Liu, Dingsheng; Zhao, Lingjun

    2008-12-01

    This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key of this method is to automatically extract tie-point pairs from such heterogeneous images according to geographic characteristics. Since there are large differences in texture and corner features between such heterogeneous images, a more detailed analysis is performed to find similarities and differences between high resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features in the remote sensing image. Crossings and corners extracted from these linear features are chosen as GCP candidates. A similar method is used to find the same features in the DRGs. Finally, the Hausdorff distance is adopted to pick matching GCPs from the above two GCP groups. Experiments showed that the method can extract GCPs from such images with a reasonable RMS error.
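
    For reference, a minimal sketch of the Hausdorff-distance comparison step, assuming SciPy and two invented candidate GCP sets; the FCM-based linear feature extraction and the actual matching criterion of the paper are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(a, b):
    """Symmetric Hausdorff distance between two candidate GCP sets
    (N x 2 arrays of image coordinates)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Hypothetical corner/crossing candidates extracted from the satellite
# image and from the DRG (same geographic frame, pixel units).
rng = np.random.default_rng(5)
gcp_image = rng.uniform(0, 1000, size=(40, 2))
gcp_drg = gcp_image + rng.normal(scale=1.5, size=(40, 2))  # small localisation noise

print("Hausdorff distance [px]: %.2f" % symmetric_hausdorff(gcp_image, gcp_drg))
```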

  5. Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance

    NASA Astrophysics Data System (ADS)

    Speck, Richard P.; Herz, Norman E., Jr.

    2000-06-01

    Automatic test and calibration has become a valuable feature in many consumer products--ranging from antilock braking systems to auto-tune TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in the HMDs and thereby making binocular displays a practical reality. A suitcase sized, field portable optical ATE unit could re-zero these errors in the Ready Room to cancel the effects of aging, minor damage and component replacement. Planning on this would yield large savings through relaxed component specifications and reduced logistic costs. Yet, the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual `beyond visual range' operations. Some versions of the ATE described are in production and examples of high resolution optical test data will be discussed.

  6. A multiparametric assay for quantitative nerve regeneration evaluation.

    PubMed

    Weyn, B; van Remoortere, M; Nuydens, R; Meert, T; van de Wouwer, G

    2005-08-01

    We introduce an assay for the semi-automated quantification of nerve regeneration by image analysis. Digital images of histological sections of regenerated nerves are recorded using an automated inverted microscope and merged into high-resolution mosaic images representing the entire nerve. These are analysed by a dedicated image-processing package that computes nerve-specific features (e.g. nerve area, fibre count, myelinated area) and fibre-specific features (area, perimeter, myelin sheet thickness). The assay's performance and correlation of the automatically computed data with visually obtained data are determined on a set of 140 semithin sections from the distal part of a rat tibial nerve from four different experimental treatment groups (control, sham, sutured, cut) taken at seven different time points after surgery. Results show a high correlation between the manually and automatically derived data, and a high discriminative power towards treatment. Extra value is added by the large feature set. In conclusion, the assay is fast and offers data that currently can be obtained only by a combination of laborious and time-consuming tests.

  7. SolTrack: an automatic video processing software for in situ interface tracking.

    PubMed

    Griesser, S; Pierer, R; Reid, M; Dippenaar, R

    2012-10-01

    High-Resolution in situ observation of solidification experiments has become a powerful technique to improve the fundamental understanding of solidification processes of metals and alloys. In the present study, high-temperature laser-scanning confocal microscopy (HTLSCM) was utilized to observe and capture in situ solidification and phase transformations of alloys for subsequent post processing and analysis. Until now, this analysis has been very time consuming as frame-by-frame manual evaluation of propagating interfaces was used to determine the interface velocities. SolTrack has been developed using the commercial software package MATLAB and is designed to automatically detect, locate and track propagating interfaces during solidification and phase transformations as well as to calculate interfacial velocities. Different solidification phenomena have been recorded to demonstrate a wider spectrum of applications of this software. A validation, through comparison with manual evaluation, is included where the accuracy is shown to be very high. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.

  8. Markov Random Field Based Automatic Image Alignment for ElectronTomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moussavi, Farshid; Amat, Fernando; Comolli, Luis R.

    2007-11-30

    Cryo electron tomography (cryo-ET) is the primary method for obtaining 3D reconstructions of intact bacteria, viruses, and complex molecular machines ([7],[2]). It first flash freezes a specimen in a thin layer of ice, and then rotates the ice sheet in a transmission electron microscope (TEM), recording images of different projections through the sample. The resulting images are aligned and then back projected to form the desired 3-D model. The typical resolution of a biological electron microscope is on the order of 1 nm per pixel, which means that small imprecision in the microscope's stage or lenses can cause large alignment errors. To enable a high precision alignment, biologists add a small number of spherical gold beads to the sample before it is frozen. These beads generate high contrast dots in the image that can be tracked across projections. Each gold bead can be seen as a marker with a fixed location in 3D, which provides the reference points to bring all the images to a common frame as in the classical structure from motion problem. A high accuracy alignment is critical to obtain a high resolution tomogram (usually on the order of 5-15 nm resolution). While some methods try to automate the task of tracking markers and aligning the images ([8],[4]), they require user intervention if the SNR of the image becomes too low. Unfortunately, cryogenic electron tomography (or cryo-ET) often has poor SNR, since the samples are relatively thick (for TEM) and the restricted electron dose usually results in projections with SNR under 0 dB. This paper shows that formulating this problem as a maximum-likelihood estimation task yields an approach that is able to automatically align cryo-ET datasets with high precision using inference in graphical models. This approach has been packaged into publicly available software called RAPTOR (Robust Alignment and Projection estimation for Tomographic Reconstruction).

  9. Automatic Coregistration for Multiview SAR Images in Urban Areas

    NASA Astrophysics Data System (ADS)

    Xiang, Y.; Kang, W.; Wang, F.; You, H.

    2017-09-01

    Due to the high resolution and the side-looking mechanism of SAR sensors, complex building structures make the registration of SAR images in urban areas very hard. In order to solve this problem, an automatic and robust coregistration approach for multiview high resolution SAR images is proposed in this paper, which consists of three main modules. First, both the reference image and the sensed image are segmented into two parts, urban areas and non-urban areas. Urban areas, which are affected by double or multiple scattering in a SAR image, tend to show higher local mean and local variance values compared with general homogeneous regions due to their complex structural information. Based on this criterion, building areas are extracted. After obtaining the target regions, L-shape structures are detected using the SAR phase congruency model and the Hough transform. The double-bounce scattering formed by wall and ground appears as strong L- or T-shapes, which are usually taken as the most reliable indicator for building detection. According to the assumption that buildings are rectangular and flat, planimetric buildings are delineated using the L-shapes, and the reconstructed target areas are obtained. For the original areas and the reconstructed target areas, the SAR-SIFT matching algorithm is implemented. Finally, correct corresponding points are extracted by fast sample consensus (FSC) and the transformation model is derived. The experimental results on a pair of multiview TerraSAR images with 1-m resolution show that the proposed approach gives a robust and precise registration performance, compared with the original SAR-SIFT method.

  10. Three-dimensional murine airway segmentation in micro-CT images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.

    2007-03-01

    Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree is an important step in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming since a typical dataset is usually on the order of several gigabytes in size. Automated and semi-automated tools for micro-CT airway analysis are desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual matches between manually segmented and automatically segmented trees. The average true positive volume fraction compared to manual analysis is 91.61%. The overall runtime for the automatic method is on the order of 30 minutes per volume compared to several hours to a few days for manual analysis.
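
    A toy sketch of a grayscale-morphology-based airway extraction follows, assuming SciPy and a synthetic volume with a dark tube standing in for an airway; it is not the authors' algorithm, uses invented threshold and seed values, and skips leakage control and validation.

```python
import numpy as np
from scipy import ndimage

def segment_airway(volume, seed, closing_size=3, threshold=-500):
    """Crude sketch: grayscale closing to suppress noise, thresholding of
    air-like voxels, then keeping only the connected component that
    contains the seed (the seed must lie inside the airway lumen)."""
    smoothed = ndimage.grey_closing(volume, size=(closing_size,) * 3)
    air = smoothed < threshold                  # air in Hounsfield-like units
    labels, _ = ndimage.label(air)
    return labels == labels[seed]

# Synthetic volume: soft tissue (~0 HU) with a dark tube along the z axis.
vol = np.zeros((64, 64, 64), dtype=np.int16)
vol[:, 30:34, 30:34] = -1000
airway = segment_airway(vol, seed=(0, 31, 31))
print("airway voxels:", int(airway.sum()))
```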

  11. X-ray phase contrast tomography from whole organ down to single cells

    NASA Astrophysics Data System (ADS)

    Krenkel, Martin; Töpperwien, Mareike; Bartels, Matthias; Lingor, Paul; Schild, Detlev; Salditt, Tim

    2014-09-01

    We use propagation-based hard x-ray phase contrast tomography to explore the three-dimensional structure of neuronal tissue from the organ down to the sub-cellular level, based on combinations of synchrotron radiation and laboratory sources. To this end a laboratory-based microfocus tomography setup has been built in which the geometry was optimized for phase contrast imaging and tomography. By utilizing phase retrieval algorithms, quantitative reconstructions can be obtained that enable automatic renderings without edge artifacts. A high-brightness liquid-metal microfocus x-ray source in combination with a high-resolution detector yields a resolution down to 1.5 μm. To extend the method to nanoscale resolution we use a divergent x-ray waveguide beam geometry at the synchrotron. Thus, the magnification can be easily tuned by placing the sample at different defocus distances. Due to the small Fresnel numbers in this geometry, the measured images are of a holographic nature, which poses a challenge for phase retrieval.

  12. On the combined use of high temporal resolution, optical satellite data for flood monitoring and mapping: a possible contribution from the RST approach

    NASA Astrophysics Data System (ADS)

    Faruolo, M.; Coviello, I.; Lacava, T.; Pergola, N.; Tramutoli, V.

    2009-04-01

    Among natural disasters, floods are among the most common and devastating, often causing high environmental, economic and social costs. When a flooding event occurs, timely information about its precise location, extent, dynamic evolution, etc., is highly required in order to effectively support civil protection activities aimed at managing the emergency. Satellite remote sensing may represent a supplementary information source, providing mapping and continuous monitoring of the flooding extent as well as a quick damage assessment. Such purposes need frequently updated satellite images as well as suitable image processing techniques able to identify flooded areas with reliability and timeliness. Recently, an innovative satellite data analysis approach (named RST, Robust Satellite Technique) has been applied to NOAA-AVHRR (Advanced Very High Resolution Radiometer) satellite data in order to dynamically map flooded areas. Thanks to a multi-temporal analysis of co-located satellite records and an automatic change detection scheme, this approach overcomes major drawbacks of previously proposed methods (most of which are not automatic and are based on empirically chosen thresholds, often affected by false identifications). In this paper, the RST approach has been applied for the first time to both AVHRR and EOS/MODIS (Moderate Resolution Imaging Spectroradiometer) data, in order to assess its potential for flooded-area mapping and monitoring with satellite packages characterized by different spectral and spatial resolutions. As a study case, the flooding event which hit Europe in August 2002 has been selected. Preliminary results shown in this study seem to confirm the potential of this approach in providing reliable and timely information, useful for near-real-time flood hazard assessment and monitoring, using both MODIS and AVHRR data. Moreover, the combined use of information coming from both satellite packages (easily achievable thanks to the intrinsic exportability of RST to different sensors) significantly improves the surface sampling rate (from 6 to less than 3 hours), reducing the negative impact of cloud coverage, currently one of the main limits of this kind of satellite technology.
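
    A schematic NumPy sketch of a standardized multi-temporal anomaly index in the spirit of RST follows; the reflectance values, reference stack and threshold are invented, and the operational RST processing, cloud screening and calibration are not reproduced.

```python
import numpy as np

def rst_anomaly(signal, reference_stack):
    """Pixel-wise standardised anomaly: the current observation is
    compared against the temporal mean and standard deviation of
    co-located historical records."""
    mu = np.nanmean(reference_stack, axis=0)
    sigma = np.nanstd(reference_stack, axis=0)
    return (signal - mu) / np.where(sigma > 0, sigma, np.nan)

# Hypothetical near-infrared reflectance stack (years x rows x cols)
# and one new acquisition; strongly negative anomalies flag water.
rng = np.random.default_rng(6)
history = 0.3 + 0.05 * rng.standard_normal((15, 100, 100))
current = history.mean(axis=0).copy()
current[40:60, 40:60] = 0.05            # flooded patch: NIR drops sharply
index = rst_anomaly(current, history)
flood_mask = index < -2.0
print("flagged pixels:", int(flood_mask.sum()))
```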

  13. Unsupervised Clustering of Subcellular Protein Expression Patterns in High-Throughput Microscopy Images Reveals Protein Complexes and Functional Relationships between Proteins

    PubMed Central

    Handfield, Louis-François; Chong, Yolanda T.; Simmons, Jibril; Andrews, Brenda J.; Moses, Alan M.

    2013-01-01

    Protein subcellular localization has been systematically characterized in budding yeast using fluorescently tagged proteins. Based on the fluorescence microscopy images, subcellular localization of many proteins can be classified automatically using supervised machine learning approaches that have been trained to recognize predefined image classes based on statistical features. Here, we present an unsupervised analysis of protein expression patterns in a set of high-resolution, high-throughput microscope images. Our analysis is based on 7 biologically interpretable features which are evaluated on automatically identified cells, and whose cell-stage dependency is captured by a continuous model for cell growth. We show that it is possible to identify most previously identified localization patterns in a cluster analysis based on these features and that similarities between the inferred expression patterns contain more information about protein function than can be explained by a previous manual categorization of subcellular localization. Furthermore, the inferred cell-stage associated to each fluorescence measurement allows us to visualize large groups of proteins entering the bud at specific stages of bud growth. These correspond to proteins localized to organelles, revealing that the organelles must be entering the bud in a stereotypical order. We also identify and organize a smaller group of proteins that show subtle differences in the way they move around the bud during growth. Our results suggest that biologically interpretable features based on explicit models of cell morphology will yield unprecedented power for pattern discovery in high-resolution, high-throughput microscopy images. PMID:23785265

  14. Intensity-hue-saturation-based image fusion using iterative linear regression

    NASA Astrophysics Data System (ADS)

    Cetin, Mufit; Tepecik, Abdulkadir

    2016-10-01

    The image fusion process produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage, owing to its fast computation and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when large gray-value differences exist between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique that avoids these distortions by automatically adjusting the amount of spatial information injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause spectral distortions by assigning them weaker weights, avoiding a large number of redundancies in the fused image. The experimental database consists of IKONOS images, and the experimental results, both visual and statistical, demonstrate the improvement of the proposed algorithm when compared with several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
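
    For context, here is a baseline (non-adaptive) IHS-style fusion sketch in NumPy, the kind of scheme the spatially adaptive SA-IHS weighting improves upon; the arrays are synthetic and the simple mean is used as the intensity component, which is only one of several common choices.

```python
import numpy as np

def ihs_fuse(ms_rgb, pan):
    """Classic additive IHS-style fusion: the spatial detail of the
    panchromatic band (pan minus intensity) is injected equally into
    every band of the up-sampled multispectral image."""
    intensity = ms_rgb.mean(axis=2)                 # simple I component
    detail = pan - intensity                        # spatial detail to inject
    return np.clip(ms_rgb + detail[..., None], 0.0, 1.0)

rng = np.random.default_rng(7)
ms = rng.random((256, 256, 3))                      # up-sampled multispectral (0-1)
pan = ms.mean(axis=2) + 0.05 * rng.standard_normal((256, 256))  # pan stand-in
fused = ihs_fuse(ms, pan)
print(fused.shape, fused.dtype)
```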

  15. Nondestructive analysis of automotive paints with spectral domain optical coherence tomography.

    PubMed

    Dong, Yue; Lawman, Samuel; Zheng, Yalin; Williams, Dominic; Zhang, Jinke; Shen, Yao-Chun

    2016-05-01

    We have demonstrated for the first time, to our knowledge, the use of optical coherence tomography (OCT) as an analytical tool for nondestructively characterizing the individual paint layer thickness of multiple layered automotive paints. A graph-based segmentation method was used for automatic analysis of the thickness distribution for the top layers of solid color paints. The thicknesses measured with OCT were in good agreement with the optical microscope and ultrasonic techniques that are the current standard in the automobile industry. Because of its high axial resolution (5.5 μm), the OCT technique was shown to be able to resolve the thickness of individual paint layers down to 11 μm. With its high lateral resolution (12.4 μm), the OCT system was also able to measure the cross-sectional area of the aluminum flakes in a metallic automotive paint. The range of values measured was 300-1850  μm2. In summary, the proposed OCT is a noncontact, high-resolution technique that has the potential for inclusion as part of the quality assurance process in automobile coating.

  16. Programmable Multiple-Ramped-Voltage Power Supply

    NASA Technical Reports Server (NTRS)

    Ajello, Joseph M.; Howell, S. K.

    1993-01-01

    Ramp waveforms range up to 2,000 V. Laboratory high-voltage power-supply system puts out variety of stable voltages programmed to remain fixed with respect to ground or float with respect to ramp waveform. Measures voltages it produces with high resolution; automatically calibrates, zeroes, and configures itself; and produces variety of input/output signals for use with other instruments. Developed for use with ultraviolet spectrometer. Also applicable to control of electron guns in general and to operation of such diverse equipment used in measuring scattering cross sections of subatomic particles and in industrial electron-beam welders.

  17. Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data

    NASA Astrophysics Data System (ADS)

    d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter

    2010-12-01

    IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of using reference images with a lower lateral accuracy; this results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.

  18. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  19. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.

  20. CERES: A new cerebellum lobule segmentation method.

    PubMed

    Romero, Jose E; Coupé, Pierrick; Giraud, Rémi; Ta, Vinh-Thong; Fonov, Vladimir; Park, Min Tae M; Chakravarty, M Mallar; Voineskos, Aristotle N; Manjón, Jose V

    2017-02-15

    The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum's role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with related recent state-of-the-art methods, showing competitive results in both accuracy (average DICE of 0.7729) and execution time (around 5 minutes). Copyright © 2016 Elsevier Inc. All rights reserved.
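
    The accuracy figure above refers to the Dice overlap coefficient; a minimal sketch of computing it from an automatic and a manual label mask (the masks below are hypothetical) is:

      # Minimal sketch: Dice overlap between an automatic and a manual label mask.
      import numpy as np

      def dice(a, b):
          """a, b: boolean arrays marking one cerebellar lobule in two segmentations."""
          inter = np.logical_and(a, b).sum()
          return 2.0 * inter / (a.sum() + b.sum())

      auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
      manual = np.zeros((64, 64), bool); manual[15:45, 12:42] = True
      print(round(dice(auto, manual), 4))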

  1. Building Extraction Based on Openstreetmap Tags and Very High Spatial Resolution Image in Urban Area

    NASA Astrophysics Data System (ADS)

    Kang, L.; Wang, Q.; Yan, H. W.

    2018-04-01

    Deriving building contours from VHR images is the essential problem for automatic building extraction in urban areas. To solve this problem, OSM data are introduced to provide vector contour information of buildings, which is hard to obtain from VHR images alone. First, we import the OSM data into a database; the OSM line-string data with tags such as building, amenity and office are selected and combined into complete contours. Second, the accuracy of the building contours is confirmed by comparison with the real buildings in Google Earth. Third, maximum likelihood classification is conducted with the confirmed building contours, and the result demonstrates that the proposed approach is effective and accurate. The approach offers a new way for automatic interpretation of VHR images.
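
    A minimal sketch of the tag-based selection step, assuming a hypothetical list of OSM ways already loaded from the database, could keep closed ways carrying building-like tags (building, amenity, office, as named in the abstract):

      # Minimal sketch: keep OSM ways whose tags mark them as building-like contours.
      # The ways themselves are hypothetical; tag keys follow the abstract.
      ways = [
          {"id": 1, "tags": {"building": "yes"}, "nodes": [101, 102, 103, 101]},
          {"id": 2, "tags": {"highway": "residential"}, "nodes": [104, 105]},
          {"id": 3, "tags": {"amenity": "school"}, "nodes": [106, 107, 108, 106]},
      ]
      BUILDING_KEYS = {"building", "amenity", "office"}

      contours = [w for w in ways
                  if BUILDING_KEYS & w["tags"].keys()   # any building-like tag
                  and w["nodes"][0] == w["nodes"][-1]]  # closed ring -> usable contour
      print([w["id"] for w in contours])                # -> [1, 3]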

  2. High definition for systems biology of microbial communities: metagenomics gets genome-centric and strain-resolved.

    PubMed

    Turaev, Dmitrij; Rattei, Thomas

    2016-06-01

    The systems biology of microbial communities, organismal communities inhabiting all ecological niches on earth, has in recent years been strongly facilitated by the rapid development of experimental, sequencing and data analysis methods. Novel experimental approaches and binning methods in metagenomics render the semi-automatic reconstructions of near-complete genomes of uncultivable bacteria possible, while advances in high-resolution amplicon analysis allow for efficient and less biased taxonomic community characterization. This will also facilitate predictive modeling approaches, hitherto limited by the low resolution of metagenomic data. In this review, we pinpoint the most promising current developments in metagenomics. They facilitate microbial systems biology towards a systemic understanding of mechanisms in microbial communities with scopes of application in many areas of our daily life. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadaster Agencies (NMCA), the updating cycle takes a few years. Today, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their database every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process included comparing images from different periods. The success rates in identifying the objects were low, and most were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, the development of mapping technologies, advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with an NIR band and Very High Resolution satellites, allow the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, Multi Spectral (MS) classification, MS segmentation, object analysis and shape forming algorithms. This article reviews the results of a novel change detection methodology as a first step for updating the NTDB of the Survey of Israel.

  4. A quantitative image cytometry technique for time series or population analyses of signaling networks.

    PubMed

    Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya

    2010-04-01

    Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data of signaling events with high spatio-temporal resolution. A novel technique which allows us to obtain such data is needed for systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as comparable reproducibility to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique appears as a highly quantitative and versatile technique, which can be a convenient replacement for the most conventional techniques including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.

  5. Beyond the resolution limit: subpixel resolution in animals and now in silicon

    NASA Astrophysics Data System (ADS)

    Wilcox, M. J.

    2007-09-01

    Automatic acquisition of aerial threats at thousands of kilometers distance requires high sensitivity to small differences in contrast and high optical quality for subpixel resolution, since targets occupy much less surface area than a single pixel. Targets travel at high speed and break up in the re-entry phase. Target/decoy discrimination at the earliest possible time is imperative. Real time performance requires a multifaceted approach with hyperspectral imaging and analog processing allowing feature extraction in real time. Hyperacuity Systems has developed a prototype chip capable of nonlinear increase in resolution, or subpixel resolution, far beyond either pixel size or spacing. The performance increase is due to a biomimetic implementation of animal retinas. Photosensitivity is not homogeneous across the sensor surface, allowing pixel parsing. It is remarkably simple to provide this profile to detectors and we showed at least three ways to do so. Individual photoreceptors have a Gaussian sensitivity profile and this nonlinear profile can be exploited to extract high-resolution information. Adaptive, analog circuitry provides contrast enhancement, dynamic range setting with offset and gain control. Pixels are processed in parallel within modular elements called cartridges, like photo-receptor inputs in fly eyes. These modular elements are connected by a novel function for a cell matrix known as L4. The system is exquisitely sensitive to small target motion and operates with a robust signal under degraded viewing conditions, allowing detection of targets smaller than a single pixel or at greater distance. Therefore, not only is instantaneous feature extraction possible but also subpixel resolution. Analog circuitry increases processing speed with more accurate motion specification for target tracking and identification.

  6. Mapping whole-brain activity with cellular resolution by light-sheet microscopy and high-throughput image analysis (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Silvestri, Ludovico; Rudinskiy, Nikita; Paciscopi, Marco; Müllenbroich, Marie Caroline; Costantini, Irene; Sacconi, Leonardo; Frasconi, Paolo; Hyman, Bradley T.; Pavone, Francesco S.

    2016-03-01

    Mapping neuronal activity patterns across the whole brain with cellular resolution is a challenging task for state-of-the-art imaging methods. Indeed, despite a number of technological efforts, quantitative cellular-resolution activation maps of the whole brain have not yet been obtained. Many techniques are limited by coarse resolution or by a narrow field of view. High-throughput imaging methods, such as light sheet microscopy, can be used to image large specimens with high resolution and in reasonable times. However, the bottleneck is then moved from image acquisition to image analysis, since many TeraBytes of data have to be processed to extract meaningful information. Here, we present a full experimental pipeline to quantify neuronal activity in the entire mouse brain with cellular resolution, based on a combination of genetics, optics and computer science. We used a transgenic mouse strain (Arc-dVenus mouse) in which neurons which have been active in the last hours before brain fixation are fluorescently labelled. Samples were cleared with CLARITY and imaged with a custom-made confocal light sheet microscope. To perform an automatic localization of fluorescent cells on the large images produced, we used a novel computational approach called semantic deconvolution. The combined approach presented here allows quantifying the amount of Arc-expressing neurons throughout the whole mouse brain. When applied to cohorts of mice subject to different stimuli and/or environmental conditions, this method helps find correlations in activity between different neuronal populations, opening the possibility to infer a sort of brain-wide 'functional connectivity' with cellular resolution.

  7. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    PubMed

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote, and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  8. Pyroclast Tracking Velocimetry: A particle tracking velocimetry-based tool for the study of Strombolian explosive eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Moroni, Monica; Taddeucci, Jacopo; Scarlato, Piergiorgio; Shindler, Luca

    2014-07-01

    Image-based techniques enable high-resolution observation of the pyroclasts ejected during Strombolian explosions and drawing inferences on the dynamics of volcanic activity. However, data extraction from high-resolution videos is time consuming and operator dependent, while automatic analysis is often challenging due to the highly variable quality of images collected in the field. Here we present a new set of algorithms to automatically analyze image sequences of explosive eruptions: the pyroclast tracking velocimetry (PyTV) toolbox. First, a significant preprocessing is used to remove the image background and to detect the pyroclasts. Then, pyroclast tracking is achieved with a new particle tracking velocimetry algorithm, featuring an original predictor of velocity based on the optical flow equation. Finally, postprocessing corrects the systematic errors of measurements. Four high-speed videos of Strombolian explosions from Yasur and Stromboli volcanoes, representing various observation conditions, have been used to test the efficiency of the PyTV against manual analysis. In all cases, > 10⁶ pyroclasts have been successfully detected and tracked by PyTV, with a precision of 1 m/s for the velocity and 20% for the size of the pyroclast. On each video, more than 1000 tracks are several meters long, enabling us to study pyroclast properties and trajectories. Compared to manual tracking, 3 to 100 times more pyroclasts are analyzed. PyTV, by providing time-constrained information, links physical properties and motion of individual pyroclasts. It is a powerful tool for the study of explosive volcanic activity, as well as an ideal complement for other geological and geophysical volcano observation systems.
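
    The tracking step can be illustrated with a much simplified predictor-based matcher (a constant-velocity prediction rather than PyTV's optical-flow predictor); the track states and detections below are hypothetical pixel coordinates.

      # Minimal sketch of predictor-based particle tracking: each track's next position is
      # predicted from its last velocity and matched to the nearest detection in the next frame.
      import numpy as np

      def step_tracks(tracks, detections, max_dist=15.0):
          """tracks: list of dicts with 'pos' and 'vel' arrays; detections: (N, 2) array."""
          used = set()
          for tr in tracks:
              pred = tr["pos"] + tr["vel"]                 # constant-velocity prediction
              d = np.linalg.norm(detections - pred, axis=1)
              j = int(np.argmin(d))
              if d[j] < max_dist and j not in used:
                  tr["vel"] = detections[j] - tr["pos"]    # update the velocity estimate
                  tr["pos"] = detections[j]
                  used.add(j)
          return tracks

      tracks = [{"pos": np.array([10.0, 5.0]), "vel": np.array([2.0, 1.0])}]
      detections = np.array([[12.5, 6.2], [40.0, 40.0]])
      print(step_tracks(tracks, detections)[0]["pos"])     # -> [12.5  6.2]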

  9. Surface inspection system for carriage parts

    NASA Astrophysics Data System (ADS)

    Denkena, Berend; Acker, Wolfram

    2006-04-01

    Quality standards are very high in carriage manufacturing, due to the fact that the visual quality impression is highly relevant for the customer's purchase decision. In carriage parts even very small dents can be visible on the varnished and polished surface by observing reflections. The industrial demand is to detect these form errors on the unvarnished part. In order to meet the requirements, a stripe projection system for automatic recognition of waviness and form errors is introduced. It is based on a modified stripe projection method using a high resolution line scan camera. Particular emphasis is put on achieving a short measuring time and a high resolution in depth, aiming at a reliable automatic recognition of dents and waviness of 10 μm on large curved surfaces of approximately 1 m width. The resulting point cloud needs to be filtered in order to detect dents. Therefore a spatial filtering technique is used. This works well on smoothly curved surfaces, if frequency parameters are well defined. On more complex parts like mudguards the method is restricted by the fact that frequencies near the defined dent frequencies occur within the surface as well. To allow analysis of complex parts, the system is currently being extended by including 3D CAD models in the inspection process. For smoothly curved surfaces, the measuring speed of the prototype is mainly limited by the amount of light produced by the stripe projector. For complex surfaces the measuring speed is limited by the time-consuming matching process. Currently, the development focuses on the improvement of the measuring speed.

  10. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
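
    For context, the sketch below shows how a threshold matrix is applied for ordered-dither halftoning; the genetic algorithm of the abstract optimizes the matrix entries themselves for blue-noise behaviour, whereas a fixed 4x4 Bayer matrix stands in here as a placeholder.

      # Minimal sketch: halftone a grayscale image with a tiled threshold matrix.
      import numpy as np

      bayer4 = (np.array([[ 0,  8,  2, 10],
                          [12,  4, 14,  6],
                          [ 3, 11,  1,  9],
                          [15,  7, 13,  5]]) + 0.5) / 16.0   # thresholds in (0, 1)

      def halftone(gray):
          """gray: 2-D array with values in [0, 1]; returns a binary dot pattern."""
          h, w = gray.shape
          tiled = np.tile(bayer4, (h // 4 + 1, w // 4 + 1))[:h, :w]
          return (gray > tiled).astype(np.uint8)

      print(halftone(np.full((8, 8), 0.5)).sum())  # half the dots are on for mid-gray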

  11. The IRM fluxgate magnetometer

    NASA Technical Reports Server (NTRS)

    Luehr, H.; Kloecker, N.; Oelschlaegel, W.; Haeusler, B.; Acuna, M.

    1985-01-01

    This report describes the three-axis fluxgate magnetometer instrument on board the AMPTE IRM spacecraft. Important features of the instrument are its wide dynamic range (0.1-60,000 nT), a high resolution (16-bit analog to digital conversion) and the capability to operate automatically or via telecommand in two gain states. In addition, the wave activity is monitored in all three components up to 50 Hz. Inflight checkout proved the nominal functioning of the instrument in all modes.

  12. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images.

    PubMed

    Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei

    2012-10-01

    To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors' classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors' automatic classification and manual segmentation were 91.6% ± 2.0%. A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution.
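
    A minimal sketch of plain fuzzy C-means on voxel intensities (the authors' modified FCM adds the skin mask and spatial handling not reproduced here) could look as follows; the two intensity distributions are hypothetical stand-ins for fat and glandular tissue.

      # Minimal sketch of fuzzy C-means clustering of 1-D intensities into two classes.
      import numpy as np

      def fcm(x, c=2, m=2.0, iters=50):
          """x: 1-D intensity array; returns (class centers, membership matrix of shape (c, N))."""
          rng = np.random.default_rng(0)
          u = rng.dirichlet(np.ones(c), size=x.size).T           # random fuzzy memberships
          for _ in range(iters):
              um = u ** m
              centers = um @ x / um.sum(axis=1)                  # weighted class centers
              d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # distances to centers
              u = 1.0 / (d ** (2 / (m - 1)))                     # inverse-distance memberships
              u /= u.sum(axis=0)                                 # normalize per voxel
          return centers, u

      x = np.concatenate([np.random.normal(-100, 15, 500),       # fat-like values
                          np.random.normal(40, 15, 500)])        # glandular-like values
      centers, u = fcm(x)
      print(np.sort(centers).round(1))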

  13. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images

    PubMed Central

    Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei

    2012-01-01

    Purpose: To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. Methods: The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors’ classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. Results: The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors’ automatic classification and manual segmentation were 91.6% ± 2.0%. Conclusions: A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution. PMID:23039675

  14. Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements

    NASA Astrophysics Data System (ADS)

    Jakub, Thomas D.

    Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Several of the shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing and the limited resolution based on the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify under which conditions errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal to noise environment. These ship tracks were only minutes long compared to the normally 12 to 24 hour ship tracks. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements when combined with ship drift can provide another method of estimating ocean currents, particularly when other measurements techniques are not available.
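
    The underlying ship-drift computation can be sketched on two AIS fixes: the current is the observed displacement minus the displacement dead-reckoned from heading and speed through the water, divided by the elapsed time. The flat-earth conversion and all numbers below are illustrative assumptions, not taken from this work.

      # Minimal sketch of the ship-drift idea on two consecutive AIS position fixes.
      import math

      def drift_current(fix1, fix2, heading_deg, speed_ms):
          """fix = (lat_deg, lon_deg, t_seconds); returns (east, north) current in m/s."""
          lat1, lon1, t1 = fix1
          lat2, lon2, t2 = fix2
          dt = t2 - t1
          north = (lat2 - lat1) * 111_000.0                       # metres per degree latitude
          east = (lon2 - lon1) * 111_000.0 * math.cos(math.radians(lat1))
          h = math.radians(heading_deg)
          dead_east = speed_ms * math.sin(h) * dt                 # displacement through the water
          dead_north = speed_ms * math.cos(h) * dt
          return (east - dead_east) / dt, (north - dead_north) / dt

      print(drift_current((37.80, -122.40, 0), (37.801, -122.399, 600), 45.0, 0.2))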

  15. Localized contourlet features in vehicle make and model recognition

    NASA Astrophysics Data System (ADS)

    Zafar, I.; Edirisinghe, E. A.; Acar, B. S.

    2009-02-01

    Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic Number Plate Recognition (ANPR). Several vehicle MMR systems have been proposed in the literature. In parallel to this, multi-resolution feature analysis techniques leading to efficient object classification algorithms have received close attention from the research community. To this effect, the Contourlet transform, which provides an efficient directional multi-resolution image representation, has recently been introduced. An attempt has already been made in the literature to use Curvelet/Contourlet transforms in vehicle MMR. In this paper we propose a novel localized feature detection method in the Contourlet transform domain that is capable of increasing the classification rates by up to 4% compared to the previously proposed Contourlet-based vehicle MMR approach, in which the features are non-localized and thus result in sub-optimal classification. Further, we show that the proposed algorithm can achieve the increased classification accuracy of 96% at significantly lower computational complexity due to the use of Two Dimensional Linear Discriminant Analysis (2DLDA) for dimensionality reduction, which preserves features with high between-class variance and low within-class variance.

  16. Fractional-N phase-locked loop for split and direct automatic frequency control in A-GPS

    NASA Astrophysics Data System (ADS)

    Park, Chester Sungchung; Park, Sungkyung

    2018-07-01

    A low-power mixed-signal phase-locked loop (PLL) is modelled and designed for the DigRF interface between the RF chip and the modem chip. An assisted-GPS or A-GPS multi-standard system includes the DigRF interface and uses the split automatic frequency control (AFC) technique. The PLL circuitry uses the direct AFC technique and is based on the fractional-N architecture using a digital delta-sigma modulator along with a digital counter, fulfilling simple ultra-high-resolution AFC with robust digital circuitry and its timing. Relative to the output frequency, the measured AFC resolution or accuracy is <5 parts per billion (ppb) or on the order of a Hertz. The cycle-to-cycle rms jitter is <6 ps and the typical settling time is <30 μs. A spur reduction technique is adopted and implemented as well, demonstrating spur reduction without employing dithering. The proposed PLL includes a low-leakage phase-frequency detector, a low-drop-out regulator, power-on-reset circuitry and precharge circuitry. The PLL is implemented in a 90-nm CMOS process technology with 1.2 V single supply. The overall PLL draws about 1.1 mA from the supply.
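
    The fractional-N principle mentioned above can be sketched with a first-order delta-sigma accumulator that toggles the divide ratio between N and N+1 so that its long-run average equals the desired fractional value; the modulus and numbers below are illustrative assumptions, not the paper's circuit.

      # Minimal sketch of first-order delta-sigma control of a fractional-N divider.
      def fractional_n(n_int, frac, cycles, modulus=2**24):
          acc, divides = 0, []
          step = int(frac * modulus)
          for _ in range(cycles):
              acc += step
              carry = acc >= modulus          # overflow selects N+1 for this cycle
              acc -= modulus if carry else 0
              divides.append(n_int + carry)
          return divides

      d = fractional_n(100, 0.3, 10000)
      print(sum(d) / len(d))                  # -> close to 100.3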

  17. Experiences with semiautomatic aerotriangulation on digital photogrammetric stations

    NASA Astrophysics Data System (ADS)

    Kersten, Thomas P.; Stallmann, Dirk

    1995-12-01

    With the development of higher-resolution scanners, faster image-handling capabilities, and higher-resolution screens, digital photogrammetric workstations promise to rival conventional analytical plotters in functionality, i.e. in the degree of automation in data capture and processing, and in accuracy. The availability of high quality digital image data and inexpensive high capacity fast mass storage offers the capability to perform accurate semi-automatic or automatic triangulation of digital aerial photo blocks on digital photogrammetric workstations instead of analytical plotters. In this paper, we present our investigations and results on two photogrammetric triangulation blocks, the OEEPE (European Organisation for Experimental Photogrammetric Research) test block (scale 1:4'000) and a Swiss test block (scale 1:12'000) using digitized images. Twenty-eight images of the OEEPE test block were scanned on the Zeiss/Intergraph PS1 and the digital images were delivered with a resolution of 15 micrometer and 30 micrometer, while 20 images of the Swiss test block were scanned on the Desktop Publishing Scanner Agfa Horizon with a resolution of 42 micrometer and on the PS1 with 15 micrometer. Measurements in the digital images were performed on the commercial Digital Photogrammetric Station Leica/Helava DPW770 and with basic hard- and software components of the Digital Photogrammetric Station DIPS II, an experimental system of the Institute of Geodesy and Photogrammetry, ETH Zurich. As a reference, the analog images of both photogrammetric test blocks were measured at analytical plotters. On DIPS II measurements of fiducial marks, signalized and natural tie points were performed by least squares template and image matching, while on DPW770 all points were measured by the cross correlation technique. The observations were adjusted in a self-calibrating bundle adjustment. The comparisons between these results and the experiences with the functionality of the commercial and the experimental system are presented.

  18. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The revisit time depends upon latitude and varies from two (low latitudes) to ten (high latitudes) observations per 22-day orbit repeat period. The high resolution and global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at the daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous work showed that the use of ENVISAT data enables the reduction of the uncertainty on some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE using a "true" set of parameters. Only pixels representing rivers wider than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference between SWOT observations and modeled WSE using a perturbed set of parameters. Different formulations of the objective function were used, especially to account for SWOT observation errors, as well as various sets of calibration parameters.

  19. Enhanced FIB-SEM systems for large-volume 3D imaging

    PubMed Central

    Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F

    2017-01-01

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology. DOI: http://dx.doi.org/10.7554/eLife.25916.001 PMID:28500755

  20. Enhanced FIB-SEM systems for large-volume 3D imaging

    DOE PAGES

    Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan; ...

    2017-05-13

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.

  1. Detection performance in clutter with variable resolution

    NASA Astrophysics Data System (ADS)

    Schmieder, D. E.; Weathersby, M. R.

    1983-07-01

    Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal to clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
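
    The clutter definition described above lends itself to a short sketch: average the standard deviation of contiguous, roughly target-sized cells over the scene and form the signal-to-clutter ratio from a given target contrast. The synthetic scene and contrast value below are assumptions.

      # Minimal sketch of the cell-based clutter measure and the resulting SCR.
      import numpy as np

      def clutter(image, cell):
          """image: 2-D array; cell: cell edge length in pixels (about one target size)."""
          h, w = image.shape
          stds = [image[r:r + cell, c:c + cell].std()
                  for r in range(0, h - cell + 1, cell)
                  for c in range(0, w - cell + 1, cell)]
          return float(np.mean(stds))

      rng = np.random.default_rng(1)
      scene = rng.normal(100.0, 8.0, size=(128, 128))   # synthetic background
      target_contrast = 24.0                            # target-to-background difference
      print("SCR =", round(target_contrast / clutter(scene, cell=16), 2))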

  2. An Experiment Quantifying The Effect Of Clutter On Target Detection

    NASA Astrophysics Data System (ADS)

    Weathersby, Marshall R.; Schmieder, David E.

    1985-01-01

    Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal to clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.

  3. Automatic photointerpretation for land use management in Minnesota

    NASA Technical Reports Server (NTRS)

    Swanlund, G. D. (Principal Investigator); Kirvida, L.; Cheung, M.; Pile, D.; Zirkle, R.

    1974-01-01

    The author has identified the following significant results. Automatic photointerpretation techniques were utilized to evaluate the feasibility of data for land use management. It was shown that ERTS-1 MSS data can produce thematic maps of adequate resolution and accuracy to update land use maps. In particular, five typical land use areas were mapped with classification accuracies ranging from 77% to over 90%.

  4. High resolution, MRI-based, segmented, computerized head phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubal, I.G.; Harrell, C.R.; Smith, E.O.

    1999-01-01

    The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers an improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.

  5. Evaluating RGB photogrammetry and multi-temporal digital surface models for detecting soil erosion

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Keesstra, Saskia; Seeger, Manuel

    2013-04-01

    Photogrammetry is a widely used tool for generating high-resolution digital surface models. Unmanned Aerial Vehicles (UAVs), equipped with a Red Green Blue (RGB) camera, have great potential in quickly acquiring multi-temporal high-resolution orthophotos and surface models. Such datasets would ease the monitoring of geomorphological processes, such as local soil erosion and rill formation after heavy rainfall events. In this study we test a photogrammetric setup to determine data requirements for soil erosion studies with UAVs. We used a rainfall simulator (5 m²) with a rig mounted above it carrying a Panasonic GX1 16-megapixel digital camera and a 20 mm lens. The soil material in the simulator consisted of loamy sand at an angle of 5 degrees. Stereo pair images were taken before and after rainfall simulation with 75-85% overlap. Acquired images were automatically mosaicked to create high-resolution orthorectified images and digital surface models (DSM). We resampled the DSM to different spatial resolutions to analyze the effect of cell size on the accuracy of measured rill depth and soil loss estimations, and determined an optimal cell size (and thus flight altitude). Furthermore, the high spatial accuracy of the acquired surface models allows further analysis of rill formation and channel initiation related to e.g. surface roughness. We suggest implementing near-infrared and temperature sensors to combine soil moisture and soil physical properties with surface morphology for future investigations.
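
    A minimal sketch of the multi-temporal DSM comparison, assuming hypothetical pre- and post-rainfall surface models, computes eroded volume as the summed negative elevation change times the cell area:

      # Minimal sketch: erosion volume from a pre/post DSM difference (deposition ignored).
      import numpy as np

      def erosion_volume(dsm_before, dsm_after, cell_size_m):
          """Returns eroded volume in cubic metres."""
          dz = dsm_after - dsm_before                  # elevation change per cell
          eroded = np.where(dz < 0, -dz, 0.0)          # keep only lowering of the surface
          return float(eroded.sum() * cell_size_m ** 2)

      before = np.zeros((100, 100))
      after = before.copy()
      after[40:60, 45:55] -= 0.02                      # a hypothetical 2 cm deep rill
      print(erosion_volume(before, after, cell_size_m=0.01), "m^3")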

  6. RCrane: semi-automated RNA model building.

    PubMed

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  7. Analysis system of submicron particle tracks in the fine-grained nuclear emulsion by a combination of hard x-ray and optical microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naka, T., E-mail: naka@flab.phys.nagoya-u.ac.jp; Institute for Advanced Research, Nagoya University, Aichi 464-8602; Asada, T.

    Analyses of nuclear emulsion detectors that can detect and identify charged particles or radiation as tracks have typically utilized optical microscope systems because the targets have lengths from several μm to more than 1000 μm. For recent new nuclear emulsion detectors that can detect tracks of submicron length or less, the current readout systems are insufficient due to their poor resolution. In this study, we developed a new system and method using an optical microscope system for rough candidate selection and the hard X-ray microscope system at SPring-8 for high-precision analysis with a resolution better than 70 nm. Furthermore, we demonstrated the analysis of submicron-length tracks with a matching efficiency of more than 99% and position accuracy better than 5 μm. This system is now running semi-automatically.

  8. Automated analysis for microcalcifications in high resolution digital mammograms

    DOEpatents

    Mascio, Laura N.

    1996-01-01

    A method for automatically locating microcalcifications indicating breast cancer. The invention assists mammographers in finding very subtle microcalcifications and in recognizing the pattern formed by all the microcalcifications. It also draws attention to microcalcifications that might be overlooked because a more prominent feature draws attention away from an important object. A new filter has been designed to weed out false positives in one of the steps of the method. Previously, iterative selection threshold was used to separate microcalcifications from the spurious signals resulting from texture or other background. A Selective Erosion or Enhancement (SEE) Filter has been invented to improve this step. Since the algorithm detects areas containing potential calcifications on the mammogram, it can be used to determine which areas need to be stored at the highest resolution available, while, in addition, the full mammogram can be reduced to an appropriate resolution for the remaining cancer signs.
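
    The iterative selection threshold mentioned above (the classic Ridler-Calvard scheme) can be sketched as alternating between splitting the pixels at the current threshold and resetting the threshold to the mean of the two class means; the input values below are hypothetical.

      # Minimal sketch of iterative selection thresholding on a 1-D intensity sample.
      import numpy as np

      def iterative_threshold(values, tol=1e-3):
          t = float(values.mean())
          while True:
              low, high = values[values <= t], values[values > t]
              new_t = 0.5 * (low.mean() + high.mean())   # midpoint of the two class means
              if abs(new_t - t) < tol:
                  return new_t
              t = new_t

      rng = np.random.default_rng(2)
      img = np.concatenate([rng.normal(50, 5, 900), rng.normal(200, 10, 100)])
      print(round(iterative_threshold(img), 1))          # lands between the two modes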

  9. Automated analysis for microcalcifications in high resolution digital mammograms

    DOEpatents

    Mascio, L.N.

    1996-12-17

    A method is disclosed for automatically locating microcalcifications indicating breast cancer. The invention assists mammographers in finding very subtle microcalcifications and in recognizing the pattern formed by all the microcalcifications. It also draws attention to microcalcifications that might be overlooked because a more prominent feature draws attention away from an important object. A new filter has been designed to weed out false positives in one of the steps of the method. Previously, iterative selection threshold was used to separate microcalcifications from the spurious signals resulting from texture or other background. A Selective Erosion or Enhancement (SEE) Filter has been invented to improve this step. Since the algorithm detects areas containing potential calcifications on the mammogram, it can be used to determine which areas need to be stored at the highest resolution available, while, in addition, the full mammogram can be reduced to an appropriate resolution for the remaining cancer signs. 8 figs.

  10. On controlling nonlinear dissipation in high order filter methods for ideal and non-ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjogreen, B.

    2004-01-01

    The newly developed adaptive numerical dissipation control in spatially high order filter schemes for the compressible Euler and Navier-Stokes equations has been recently extended to the ideal and non-ideal magnetohydrodynamics (MHD) equations. These filter schemes are applicable to complex unsteady MHD high-speed shock/shear/turbulence problems. They also provide a natural and efficient way for the minimization of Div(B) numerical error. The adaptive numerical dissipation mechanism consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. The numerical dissipation considered consists of high order linear dissipation for the suppression of high frequency oscillation and the nonlinear dissipative portion of high-resolution shock-capturing methods for discontinuity capturing. The applicable nonlinear dissipative portion of high-resolution shock-capturing methods is very general. The objective of this paper is to investigate the performance of three commonly used types of nonlinear numerical dissipation for both the ideal and non-ideal MHD.

  11. Effortful versus automatic emotional processing in schizophrenia: Insights from a face-vignette task.

    PubMed

    Patrick, Regan E; Rastogi, Anuj; Christensen, Bruce K

    2015-01-01

    Adaptive emotional responding relies on dual automatic and effortful processing streams. Dual-stream models of schizophrenia (SCZ) posit a selective deficit in neural circuits that govern goal-directed, effortful processes versus reactive, automatic processes. This imbalance suggests that when patients are confronted with competing automatic and effortful emotional response cues, they will exhibit diminished effortful responding and intact, possibly elevated, automatic responding compared to controls. This prediction was evaluated using a modified version of the face-vignette task (FVT). Participants viewed emotional faces (automatic response cue) paired with vignettes (effortful response cue) that signalled a different emotion category and were instructed to discriminate the manifest emotion. Patients made less vignette and more face responses than controls. However, the relationship between group and FVT responding was moderated by IQ and reading comprehension ability. These results replicate and extend previous research and provide tentative support for abnormal conflict resolution between automatic and effortful emotional processing predicted by dual-stream models of SCZ.

  12. Optical Fiber On-Line Detection System for Non-Touch Monitoring Roller Shape

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Wang, Y. T.

    2006-10-01

    Based on the principle of the reflective displacement fiber-optic sensor, a high-accuracy non-touch on-line optical fiber measurement system for roller shape is presented. The principle and composition of the detection system and the operation process are also explained. By using a novel probe of three optical fibers with equal transverse spacing, the effects of fluctuations in the light source, reflectivity changes of the target surface and intensity losses in the fiber lines are automatically compensated. In addition, an optical fiber sensor model for correcting static error based on a BP artificial neural network (ANN) is set up. Interpolation and value filtering of the signals effectively reduce the influence of random noise and of the vibration of the roller bearing, which remarkably enhances accuracy and resolution. Experiments prove that the accuracy of the system meets the demands of the practical production process, providing a new method for high-speed, accurate and automatic on-line detection of mill roller shape.

  13. Glacier Frontal Line Extraction from SENTINEL-1 SAR Imagery in Prydz Area

    NASA Astrophysics Data System (ADS)

    Li, F.; Wang, Z.; Zhang, S.; Zhang, Y.

    2018-04-01

    Synthetic Aperture Radar (SAR) provides day-and-night, all-weather observation of the Earth at high resolution, and it is widely used in polar research on sea ice, ice shelves and glaciers. For glacier monitoring, the frontal position of a calving glacier at different moments in time is of great importance, as it enables estimation of the calving rate and flux of the glacier. In this abstract, an automatic algorithm for glacier front extraction using time-series Sentinel-1 SAR imagery is proposed. The technique transforms the amplitude imagery of Sentinel-1 SAR into a binary map using the SO-CFAR method; frontal points are then extracted using a profile method that reduces the 2D binary map to 1D binary profiles, and the final frontal position of the calving glacier is the optimal profile selected from the different averaged segmented profiles. The experiments show that the detection algorithm can automatically extract the frontal position of the glacier from SAR data with high efficiency.
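
    A much simplified version of the profile step, assuming a hypothetical binary ice/water map, reduces the map to per-column profiles and takes the first ice pixel along each profile as the front (the actual method additionally averages segmented profiles and selects an optimal one):

      # Minimal sketch: reduce a binary ice/water map to 1-D profiles and pick the front.
      import numpy as np

      def frontal_line(binary_map):
          """binary_map: 2-D uint8 array (1 = ice); returns the front row index per column."""
          front = np.full(binary_map.shape[1], -1)
          for col in range(binary_map.shape[1]):
              rows = np.flatnonzero(binary_map[:, col])
              if rows.size:
                  front[col] = rows[0]            # first ice pixel along this profile
          return front

      bmap = np.zeros((50, 8), np.uint8)
      for c in range(8):
          bmap[20 + c:, c] = 1                    # a slanted hypothetical glacier front
      print(frontal_line(bmap))                   # -> [20 21 22 23 24 25 26 27]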

  14. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    PubMed

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
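
    The final surface-fitting step can be illustrated with SciPy's thin-plate-spline RBF interpolator applied to hypothetical fissure particle coordinates; this is a sketch of the idea, not the authors' implementation.

      # Minimal sketch: fit a thin plate spline surface z = f(x, y) through fissure particles
      # and evaluate it on a grid that could be used to split the lobes.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(3)
      xy = rng.uniform(0, 100, size=(200, 2))                 # fissure particle (x, y) positions
      z = 50 + 0.1 * xy[:, 0] + rng.normal(0, 0.5, 200)       # noisy fissure heights

      tps = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1.0)
      gx, gy = np.meshgrid(np.linspace(0, 100, 64), np.linspace(0, 100, 64))
      surface = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(64, 64)
      print(surface.shape, round(surface.mean(), 1))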

  15. Automatic Lung Lobe Segmentation Using Particles, Thin Plate Splines, and Maximum a Posteriori Estimation

    PubMed Central

    Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.

    2011-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396
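
    As an illustration of the final surface-fitting step, the sketch below fits a thin plate spline surface through a cloud of hypothetical fissure particle positions using SciPy's radial basis function interpolator; the point coordinates are synthetic, and the particle sampling and MAP filtering stages are not reproduced.

        import numpy as np
        from scipy.interpolate import Rbf

        # Hypothetical fissure particle positions (x, y, z) surviving MAP filtering.
        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
        z = 0.3 * x + 0.1 * y + rng.normal(0, 1.0, 50)    # noisy, roughly planar fissure

        # Thin plate spline interpolating surface through the particles.
        tps = Rbf(x, y, z, function='thin_plate', smooth=0.0)

        # Evaluate the fitted fissure surface on a regular grid; voxels above and
        # below this surface would be assigned to different lobes.
        gx, gy = np.meshgrid(np.linspace(0, 100, 64), np.linspace(0, 100, 64))
        surface = tps(gx, gy)
        print(surface.shape)                              # (64, 64)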

  16. A high-resolution oxygen A-band spectrometer (HABS) and its radiation closure

    NASA Astrophysics Data System (ADS)

    Min, Q.; Yin, B.; Li, S.; Berndt, J.; Harrison, L.; Joseph, E.; Duan, M.; Kiedron, P.

    2014-02-01

    The pressure dependence of oxygen A-band absorption enables the retrieval of vertical profiles of aerosol and cloud properties from oxygen A-band spectrometry. To improve the understanding of oxygen A-band inversions and their utility, we developed a high-resolution oxygen A-band spectrometer (HABS) and deployed it at the Howard University Beltsville site during the NASA Discover Air-Quality Field Campaign in July 2011. The HABS automatically measures solar direct-beam and zenith diffuse radiation through a telescope. It exhibits excellent performance: a stable spectral response ratio, a high signal-to-noise ratio (SNR), high spectral resolution (0.16 nm), and high out-of-band rejection (10⁻⁵). To evaluate the spectral performance of HABS, a HABS simulator was developed by combining the discrete ordinates radiative transfer (DISORT) code with the High Resolution Transmission (HITRAN) 2008 database. The simulator uses a double-k approach to reduce the computational cost. The HABS-measured spectra are consistent with the corresponding simulated spectra. For direct-beam spectra, the 95% confidence intervals of the relative difference between measurement and simulation are (-0.06, 0.05) and (-0.08, 0.09) for solar zenith angles of 27° and 72°, respectively. The main differences between them occur at or near the strong oxygen absorption line centers. They are mainly caused by noise/spikes in the HABS-measured spectra, resulting from the combined effects of weak signal, low SNR, and errors in wavelength registration and absorption line parameters. The high-resolution oxygen A-band measurements from HABS can constrain active radar retrievals to obtain more accurate cloud optical properties, particularly for multi-layer and mixed-phase clouds.

  17. [Status of cardiorespiratory polysomnographic diagnosis in the sleep laboratory].

    PubMed

    Penzel, T

    1995-03-01

    The different types of sleep-related breathing and cardiovascular disorders are now well known and well defined, which makes it possible to specify a configuration that enables a cardiorespiratory sleep laboratory to perform a complete differential diagnosis. This configuration covers sleep (EEG, EOG and EMG), respiration (respiratory effort, respiratory flow and oxygen saturation), and cardiovascular function (ECG and, if indicated, blood pressure). Continuous video monitoring and a patient call system, with a technician present during the entire recording, must be assured. Recording and evaluation of all signals can be done with chart polygraphs or with computer systems, provided the latter offer a high-resolution graphic monitor. Automatic sleep analysis systems support the evaluation of polysomnograms, but automatic analysis of sleep stages and of respiratory disorders needs visual cross-checking before the results can be accepted. On the basis of current knowledge, recommendations for setting up a sleep laboratory have been issued, and new sleep laboratories are reviewed on a voluntary basis by a commission of the German Society for Sleep Research and Sleep Medicine. This first step of quality control establishes a procedure to keep the quality of diagnosis and treatment at a high level in this medical specialty.

  18. Automatic detection of Martian dark slope streaks by machine learning using HiRISE images

    NASA Astrophysics Data System (ADS)

    Wang, Yexin; Di, Kaichang; Xin, Xin; Wan, Wenhui

    2017-07-01

    Dark slope streaks (DSSs) are among the active geologic features that can currently be observed on the Martian surface. The detection of DSSs is a prerequisite for studying their appearance, morphology, and distribution in order to reveal the underlying geological mechanisms. In addition, increasingly large volumes of high resolution Mars data are now available. Hence, an automatic detection method for locating DSSs is highly desirable. In this research, we present an automatic DSS detection method that combines interest region extraction and machine learning techniques. The interest region extraction combines gradient and regional grayscale information. Moreover, a novel recognition strategy is proposed that takes the normalized minimum bounding rectangles (MBRs) of the extracted regions to calculate the Local Binary Pattern (LBP) feature and trains a DSS classifier using the Adaboost machine learning algorithm. Comparative experiments using five different feature descriptors and three different machine learning algorithms show the superiority of the proposed method. Experimental results utilizing 888 extracted region samples from 28 HiRISE images show that the overall detection accuracy of the proposed method is 92.4%, with a true positive rate of 79.1% and a false positive rate of 3.7%, which in particular indicates the method's strong performance at eliminating non-DSS regions.
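
    A compact sketch of the recognition stage is given below, assuming the interest regions have already been extracted and their minimum bounding rectangles normalized to a fixed size; the patches and labels are synthetic, and the LBP parameters and classifier settings are illustrative rather than those used in the paper.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.ensemble import AdaBoostClassifier

        def lbp_histogram(patch, P=8, R=1.0):
            """Uniform LBP histogram of one normalized MBR patch (2D uint8 array)."""
            codes = local_binary_pattern(patch, P, R, method='uniform')
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        # Synthetic training patches (stand-ins for normalized MBRs) and 0/1 labels.
        rng = np.random.default_rng(1)
        patches = [rng.integers(0, 256, size=(32, 32), dtype=np.uint8) for _ in range(40)]
        labels = rng.integers(0, 2, size=40)

        X = np.array([lbp_histogram(p) for p in patches])
        clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
        print(clf.predict(X[:5]))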

  19. Automatic quantification framework to detect cracks in teeth

    PubMed Central

    Shah, Hina; Hernandez, Pablo; Budin, Francois; Chittajallu, Deepak; Vimort, Jean-Baptiste; Walters, Rick; Mol, André; Khan, Asma; Paniagua, Beatriz

    2018-01-01

    Studies show that cracked teeth are the third most common cause of tooth loss in industrialized countries. If detected early and accurately, patients can retain their teeth for a longer time. Most cracks are not detected early because of intermittent symptoms and the lack of good diagnostic tools. Currently used imaging modalities like Cone Beam Computed Tomography (CBCT) and intraoral radiography often have low sensitivity and do not show cracks clearly. This paper introduces a novel method that can detect, quantify, and localize cracks automatically in high resolution CBCT (hr-CBCT) scans of teeth using steerable wavelets and learning methods. These initial results were created using hr-CBCT scans of a set of healthy teeth and of teeth with simulated longitudinal cracks. The cracks were simulated using multiple orientations. The crack detection was trained on the most significant wavelet coefficients at each scale using a bagged classifier of Support Vector Machines. Our results show high discriminative specificity and sensitivity of this method. The framework aims to be automatic, reproducible, and open-source. Future work will focus on the clinical validation of the proposed techniques on different types of cracks ex vivo. We believe that this work will ultimately lead to improved tracking and detection of cracks, allowing for longer lasting healthy teeth. PMID:29769755
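
    The classifier described above can be approximated with the sketch below: a bagged ensemble of Support Vector Machines trained on per-region wavelet-coefficient features. The feature matrix and labels are synthetic, the steerable wavelet extraction itself is not reproduced, and the parameter names follow recent scikit-learn versions (older releases use base_estimator instead of estimator).

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic features: most significant wavelet coefficients per scale for
        # each tooth region; labels mark cracked (1) versus healthy (0) regions.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(120, 30))
        y = rng.integers(0, 2, size=120)

        base = make_pipeline(StandardScaler(), SVC(kernel='rbf', probability=True))
        bagged_svm = BaggingClassifier(estimator=base, n_estimators=25, random_state=0)
        bagged_svm.fit(X, y)
        print(bagged_svm.predict_proba(X[:3]))   # per-region crack probabilities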

  20. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to achieve high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially the implications of molding highly complex optical surfaces. As an FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve both the performance and the cost problems. To compensate for the effect of FIR-sensor degradation on pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied to data with different resolutions and to data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity on the overall detection performance. This paper also gives an overview of the first results, showing that a reduction in FIR sensor resolution can be compensated for using fusion techniques, and that a reduction in sensitivity can likewise be compensated for.

  1. Land cover classification of VHR airborne images for citrus grove identification

    NASA Astrophysics Data System (ADS)

    Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J. Z.; Camps Valls, G.; Calpe Maravilla, J.

    Managing land resources using remote sensing techniques is becoming common practice. However, data analysis procedures should satisfy the high accuracy levels demanded by users (public or private companies and governments) in order to be used extensively. This paper presents a multi-stage classification scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana region (Spain). Spain is the leading citrus fruit producer in Europe and the fourth largest in the world. In particular, citrus fruits represent 67% of the agricultural production in this region, with a total production of 4.24 million tons (2006-2007 campaign). The citrus GIS inventory, created in 2001, needs to be updated regularly in order to monitor changes quickly enough and to allow appropriate policy making and citrus production forecasting. Automatic methods are proposed in this work to facilitate this update; the processing scheme is summarized as follows. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution aerial images (0.5 m). Next, several automatic classifiers (decision trees, artificial neural networks, and support vector machines) are trained and combined to improve the final classification accuracy. Finally, the citrus GIS is automatically updated if a sufficiently high level of confidence, based on the agreement between classifiers, is achieved. This is the case for 85% of the parcels, and accuracy results exceed 94%. The remaining parcels are classified by expert photo-interpreters in order to guarantee the high accuracy demanded by policy makers.
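
    A minimal sketch of the classifier-combination rule is shown below: parcels are updated automatically only where all classifiers agree, and the rest are flagged for photo-interpretation. The three classifiers, features, and labels are placeholders; the object-oriented feature extraction and the actual confidence rule used in the paper are not reproduced.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical per-parcel feature vectors and citrus / non-citrus labels.
        rng = np.random.default_rng(3)
        X_train, y_train = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)
        X_new = rng.normal(size=(20, 12))

        classifiers = [DecisionTreeClassifier(random_state=0),
                       MLPClassifier(max_iter=500, random_state=0),
                       SVC(random_state=0)]
        preds = np.array([c.fit(X_train, y_train).predict(X_new) for c in classifiers])

        # Update the GIS automatically only where all classifiers agree; the rest
        # would be routed to expert photo-interpreters.
        agreement = (preds == preds[0]).all(axis=0)
        auto_label = np.where(agreement, preds[0], -1)    # -1 marks "manual review"
        print(auto_label)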

  2. Automatic and hierarchical segmentation of the human skeleton in CT images.

    PubMed

    Fu, Yabo; Liu, Shi; Li, Harold; Yang, Deshan

    2017-04-07

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic vertebrae, and 1.45 mm for pelvis bones.

  3. Automatic and hierarchical segmentation of the human skeleton in CT images

    NASA Astrophysics Data System (ADS)

    Fu, Yabo; Liu, Shi; Li, H. Harold; Yang, Deshan

    2017-04-01

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic vertebrae, and 1.45 mm for pelvis bones.
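
    The two evaluation metrics quoted above can be computed as in the sketch below: a Dice overlap between binary masks and a mean point-to-surface error between sampled surface points. The toy masks and point sets are illustrative only; in practice the surfaces would be sampled from the automatic and manual segmentations.

        import numpy as np

        def dice_coefficient(seg, ref):
            """Dice overlap between two binary masks (boolean arrays)."""
            seg, ref = seg.astype(bool), ref.astype(bool)
            inter = np.logical_and(seg, ref).sum()
            return 2.0 * inter / (seg.sum() + ref.sum())

        def mean_point_to_surface_error(points, surface_points):
            """Mean distance (e.g. in mm) from each automatic surface point to the
            closest manual surface point; both inputs are (N, 3) arrays."""
            diffs = points[:, None, :] - surface_points[None, :, :]
            dists = np.sqrt((diffs ** 2).sum(axis=2))
            return dists.min(axis=1).mean()

        a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
        b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
        print(round(dice_coefficient(a, b), 3))                     # 0.64

        pts = np.array([[0., 0., 0.], [1., 1., 1.]])
        ref = np.array([[0., 0., 1.], [1., 1., 0.]])
        print(mean_point_to_surface_error(pts, ref))                # 1.0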

  4. DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.

    PubMed

    Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A

    2017-01-01

    Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. Large volumes of data obtained at high resolution require the development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection suffer from difficulties in detecting particular cell types and in handling cell populations of different brightness, non-uniform staining, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells. We developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise dependent confidence, including for samples with cells of different brightness, non-uniform staining, and overlapping cells, for whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
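
    The watershed-based splitting of overlapping cells mentioned above can be sketched as follows, using a distance-transform watershed seeded at regional maxima; the toy image and parameters are illustrative, and the bootstrap Gaussian fit for significance testing is not reproduced.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_touching_cells(binary, min_distance=10):
            """Split touching cells in a binary mask by running a watershed on the
            distance transform, seeded at regional maxima of the distance map."""
            distance = ndi.distance_transform_edt(binary)
            peaks = peak_local_max(distance, min_distance=min_distance,
                                   labels=binary.astype(int))
            markers = np.zeros(binary.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=binary)

        # Two overlapping disc-shaped "cells".
        yy, xx = np.mgrid[0:60, 0:60]
        blob = ((yy - 30) ** 2 + (xx - 22) ** 2 < 144) | ((yy - 30) ** 2 + (xx - 40) ** 2 < 144)
        labels = split_touching_cells(blob)
        print(labels.max())    # 2 separated cells for this toy example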

  5. Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture

    PubMed Central

    Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz

    2014-01-01

    We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178

  6. Assessment of automatic ligand building in ARP/wARP.

    PubMed

    Evrard, Guillaume X; Langer, Gerrit G; Perrakis, Anastassis; Lamzin, Victor S

    2007-01-01

    The efficiency of the ligand-building module of ARP/wARP version 6.1 has been assessed through extensive tests on a large variety of protein-ligand complexes from the PDB, as available from the Uppsala Electron Density Server. Ligand building in ARP/wARP involves two main steps: automatic identification of the location of the ligand and the actual construction of its atomic model. The first step is most successful for large ligands. The second step, ligand construction, is more powerful with X-ray data at high resolution and ligands of small to medium size. Both steps are successful for ligands with low to moderate atomic displacement parameters. The results highlight the strengths and weaknesses of both the method of ligand building and the large-scale validation procedure and help to identify means of further improvement.

  7. Measurement of the edge plasma rotation on J-TEXT tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Z. F.; Luo, J.; Wang, Z. J.

    2013-07-15

    A multi-channel high resolution spectrometer was developed for the measurement of edge plasma rotation on the J-TEXT tokamak. With a design using two opposite viewing directions, the poloidal and toroidal rotations can be measured simultaneously, with a velocity accuracy of up to 1 km/s. The photon flux was enhanced by utilizing combined optical fibers; with this design, the time resolution reaches 3 ms. Assistant software, "Spectra Assist", was developed to perform spectrometer control and data analysis automatically. A multi-channel monochromatic analyzer is designed to obtain the location of the chosen ions simultaneously through inversion analysis. Some preliminary experimental results on the influence of plasma density, different magnetohydrodynamic behaviors, and the application of a biased electrode are presented.

  8. A new Lagrangian method for three-dimensional steady supersonic flows

    NASA Technical Reports Server (NTRS)

    Loh, Ching-Yuen; Liou, Meng-Sing

    1993-01-01

    In this report, the new Lagrangian method introduced by Loh and Hui is extended to three-dimensional, steady supersonic flow computation. The derivation of the conservation form and the solution of the local Riemann problem using the Godunov scheme and a high-resolution TVD (total variation diminishing) scheme are presented. This new approach is accurate and robust, capable of handling complicated geometry and interactions between discontinuous waves. Test problems show that the extended Lagrangian method retains all the advantages of the two-dimensional method (e.g., crisp resolution of a slip surface (contact discontinuity) and automatic grid generation). In this report, we also suggest a novel three-dimensional Riemann problem in which interesting and intricate flow features are present.

  9. Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology

    NASA Astrophysics Data System (ADS)

    Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim

    2016-09-01

    Due to their large penetration depth and small wavelength, hard X-rays offer a unique potential for 3D biomedical and biological imaging, combining high resolution with large sample volumes. However, in classical absorption-based computed tomography, soft tissue shows only weak contrast, limiting the achievable resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free-space propagation behind the sample is particularly well suited to making the phase shift visible. Contrast formation is based on the self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it has long been perceived as a synchrotron-based imaging technique. In this contribution we show that by combining high-brightness liquid-metal-jet microfocus sources with suitable sample preparation techniques, as well as optimized geometry, detection, and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality is thereby raised to a level accessible to automatic 3D segmentation.

  10. Tomographic brain imaging with nucleolar detail and automatic cell counting

    NASA Astrophysics Data System (ADS)

    Hieber, Simone E.; Bikis, Christos; Khimchenko, Anna; Schweighauser, Gabriel; Hench, Jürgen; Chicherova, Natalia; Schulz, Georg; Müller, Bert

    2016-09-01

    Brain tissue evaluation is essential for gaining in-depth insight into its diseases and disorders. Imaging the human brain in three dimensions has always been a challenge at the cell level. In vivo methods lack spatial resolution, and optical microscopy has a limited penetration depth. Herein, we show that hard X-ray phase tomography can visualise a volume of up to 43 mm³ of human post mortem or biopsy brain samples, demonstrating the method on the cerebellum. We automatically identified 5,000 Purkinje cells within their layer, with an error of less than 5%, and determined the local surface density to be 165 cells per mm² on average. Moreover, we highlight that three-dimensional data allow for the segmentation of sub-cellular structures, including the dendritic tree and Purkinje cell nucleoli, without dedicated staining. The method suggests that automatic cell feature quantification of human tissues is feasible in phase tomograms obtained with isotropic resolution in a label-free manner.

  11. Automatic and objective oral cancer diagnosis by Raman spectroscopic detection of keratin with multivariate curve resolution analysis

    NASA Astrophysics Data System (ADS)

    Chen, Po-Hsiung; Shimada, Rintaro; Yabumoto, Sohshi; Okajima, Hajime; Ando, Masahiro; Chang, Chiou-Tzu; Lee, Li-Tzu; Wong, Yong-Kie; Chiou, Arthur; Hamaguchi, Hiro-O.

    2016-01-01

    We have developed an automatic and objective method for detecting human oral squamous cell carcinoma (OSCC) tissues with Raman microspectroscopy. We measure 196 independent Raman spectra from 196 different points of one oral tissue sample and globally analyze these spectra using Multivariate Curve Resolution (MCR) analysis. Discrimination of OSCC tissues is made automatically and objectively by spectral matching of the MCR-decomposed Raman spectra against the standard Raman spectrum of keratin, a well-established molecular marker of OSCC. We use a total of 24 tissue samples: 10 OSCC and 10 normal tissues from the same 10 patients, plus 3 OSCC and 1 normal tissue from different patients. Following the newly developed protocol presented here, we have been able to detect OSCC tissues with 77 to 92% sensitivity (depending on how positivity is defined) and 100% specificity. The present approach lends itself to a reliable clinical diagnosis of OSCC substantiated by the “molecular fingerprint” of keratin.
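
    The spectral matching step can be illustrated with the sketch below, which scores each MCR-decomposed component against a keratin reference spectrum by cosine similarity; the spectra are random placeholders, and the 0.9 threshold is an assumption for illustration, not the criterion used in the paper.

        import numpy as np

        def spectral_match(component, reference):
            """Cosine similarity between an MCR-decomposed component spectrum and a
            reference spectrum (e.g. keratin) on the same wavenumber axis."""
            c = component / np.linalg.norm(component)
            r = reference / np.linalg.norm(reference)
            return float(np.dot(c, r))

        # Placeholder spectra on a common 1000-point wavenumber grid.
        rng = np.random.default_rng(4)
        keratin_ref = np.abs(rng.normal(size=1000))
        mcr_components = np.abs(rng.normal(size=(5, 1000)))   # 5 decomposed spectra

        scores = [spectral_match(c, keratin_ref) for c in mcr_components]
        is_positive = max(scores) > 0.9     # illustrative threshold, not from the paper
        print([round(s, 3) for s in scores], is_positive)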

  12. Automatic energy calibration algorithm for an RBS setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala

    2013-05-06

    This work describes a computer algorithm for automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters such as the electronic gain, electronic offset, and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti, and Ta peaks, and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, data from two years have been analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
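
    A rough sketch of the calibration logic is given below: locate the steepest falling (trailing) edges in a synthetic three-step spectrum via the first derivative and fit a first-order polynomial between channel and energy. The edge energies, thresholds, and the synthetic spectrum are placeholders, not the tabulated kinematic values or the exact procedure of the code described above.

        import numpy as np

        def falling_edges(spectrum, gap=20, frac=0.3, smooth=5):
            """Locate the steepest falling edges (e.g. the trailing edges of the Al,
            Ti and Ta signals) in a spectrum from its first derivative."""
            kernel = np.ones(smooth) / smooth
            deriv = np.gradient(np.convolve(spectrum, kernel, mode='same'))
            cand = np.where(deriv < frac * deriv.min())[0]    # strongly negative slopes
            edges, group = [], [cand[0]]
            for ch in cand[1:]:
                if ch - group[-1] > gap:                      # a new edge starts
                    edges.append(group[int(np.argmin(deriv[group]))])
                    group = [ch]
                else:
                    group.append(ch)
            edges.append(group[int(np.argmin(deriv[group]))])
            return np.array(edges)

        # Synthetic three-step spectrum with trailing edges near channels 400, 600, 850.
        spectrum = np.zeros(1024)
        for ch in (400, 600, 850):
            spectrum[:ch] += 500.0

        # Placeholder edge energies (keV) for Al, Ti, Ta -- not tabulated values.
        edge_energies = np.array([1200.0, 1600.0, 2000.0])
        channels = falling_edges(spectrum)
        gain, offset = np.polyfit(channels, edge_energies, 1)  # E = gain * channel + offset
        print(channels, round(gain, 2), round(offset, 1))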

  13. DLA based compressed sensing for high resolution MR microscopy of neuronal tissue

    NASA Astrophysics Data System (ADS)

    Nguyen, Khieu-Van; Li, Jing-Rebecca; Radecki, Guillaume; Ciobanu, Luisa

    2015-10-01

    In this work we present the implementation of compressed sensing (CS) on a high field preclinical scanner (17.2 T) using an undersampling trajectory based on the diffusion limited aggregation (DLA) random growth model. When applied to a library of images this approach performs better than the traditional undersampling based on the polynomial probability density function. In addition, we show that the method is applicable to imaging live neuronal tissues, allowing significantly shorter acquisition times while maintaining the image quality necessary for identifying the majority of neurons via an automatic cell segmentation algorithm.

  14. Oil Spill Disasters Detection and Monitoring by RST Analysis of Optical Satellite Radiances: the Case of Deepwater Horizon Platform in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Pergola, N.; Grimaldi, S. C.; Coviello, I.; Faruolo, M.; Lacava, T.; Tramutoli, V.

    2010-12-01

    Marine oil spill disasters may have devastating effects on the marine and coastal environment. For monitoring and mitigation purposes, timely detection and continuously updated information on polluted areas are required. Satellite remote sensing can make a significant contribution in this direction. Nowadays, SAR (Synthetic Aperture Radar) technology is recognized as the most efficient for oil spill detection and mapping, thanks to the high spatial resolution and all-time/all-weather capability of the present operational sensors. However, the revisit time of current SAR systems does not allow rapid detection and near real-time monitoring of these phenomena at the global scale. Passive optical sensors on board meteorological satellites, thanks to their high temporal resolution (from a few hours down to 15 minutes, depending on the characteristics of the platform/sensor), may at present represent a suitable alternative or complement to SAR for oil spill detection and monitoring. Up to now, some techniques based on optical satellite data have been proposed for “a posteriori” mapping of already known oil spill discharges. On the other hand, reliable satellite methods for automatic and timely detection of oil spills, for surveillance and warning purposes, are still missing. Recently, an innovative technique for automatic and near real-time oil spill detection and monitoring has been proposed. The technique is based on the general RST (Robust Satellite Technique) approach, which exploits multi-temporal satellite records to characterize the measured signal in terms of its expected value and natural variability, and then identifies signal anomalies in an automatic, unsupervised change detection step. Results obtained using AVHRR (Advanced Very High Resolution Radiometer) thermal infrared data, in different geographic areas and observational conditions, demonstrated excellent detection capabilities both in terms of sensitivity (even to the presence of thin or aged oil films) and reliability (down to zero false alarms), mainly owing to the invariance of RST with respect to local and environmental conditions. Exploiting its complete independence from the specific satellite platform, the RST approach has been successfully exported to the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Terra and Aqua satellites. In this paper, results obtained by applying the proposed methodology to the recent oil spill disaster of the Deepwater Horizon platform in the Gulf of Mexico, which discharged about 5 million barrels (roughly 800 million litres) of oil into the ocean, are shown. A dense temporal series of RST-based oil spill maps, obtained using MODIS TIR records, is presented and discussed, emphasizing the main peculiarities and specific characteristics of this event. Preliminary findings, possible residual limits, and future perspectives are also presented and discussed.
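
    The RST-style change detection can be sketched as below: each pixel of the current thermal image is compared with its historical mean and standard deviation built from co-located past scenes, and pixels whose normalized deviation exceeds a threshold are flagged. The data, the 3-sigma threshold, and the simple mean/standard-deviation reference are simplifications for illustration, not the operational RST implementation.

        import numpy as np

        def rst_anomaly_index(current, reference_stack):
            """Per-pixel deviation of the current TIR image from its historical mean,
            in units of the historical standard deviation; reference_stack is a
            (time, rows, cols) array of co-located past images."""
            mu = reference_stack.mean(axis=0)
            sigma = reference_stack.std(axis=0)
            return (current - mu) / np.where(sigma > 0, sigma, np.nan)

        # Toy example: 30 historical scenes plus one current scene with a local anomaly.
        rng = np.random.default_rng(5)
        history = 290 + rng.normal(0, 1.5, size=(30, 50, 50))   # brightness temperatures (K)
        current = 290 + rng.normal(0, 1.5, size=(50, 50))
        current[20:25, 20:25] -= 8                              # illustrative anomalous patch

        index = rst_anomaly_index(current, history)
        suspect = np.abs(index) > 3.0        # illustrative threshold
        print(int(suspect.sum()), "anomalous pixels flagged")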

  15. Towards automatic SAR-optical stereogrammetry over urban areas using very high resolution imagery

    NASA Astrophysics Data System (ADS)

    Qiu, Chunping; Schmitt, Michael; Zhu, Xiao Xiang

    2018-04-01

    In this paper we discuss the potential and challenges regarding SAR-optical stereogrammetry for urban areas, using very-high-resolution (VHR) remote sensing imagery. Since we do this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. Then, we propose a strategy for simultaneous tie point matching and 3D reconstruction, which exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine different established hand-crafted similarity measures. For the experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible with 3D positioning accuracies in the meter-domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging.

  16. Towards automatic SAR-optical stereogrammetry over urban areas using very high resolution imagery.

    PubMed

    Qiu, Chunping; Schmitt, Michael; Zhu, Xiao Xiang

    2018-04-01

    In this paper we discuss the potential and challenges regarding SAR-optical stereogrammetry for urban areas, using very-high-resolution (VHR) remote sensing imagery. Since we do this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. Then, we propose a strategy for simultaneous tie point matching and 3D reconstruction, which exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine different established hand-crafted similarity measures. For the experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible with 3D positioning accuracies in the meter-domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging.

  17. Towards real-time metabolic profiling of a biopsy specimen during a surgical operation by 1H high resolution magic angle spinning nuclear magnetic resonance: a case report.

    PubMed

    Piotto, Martial; Moussallieh, François-Marie; Neuville, Agnès; Bellocq, Jean-Pierre; Elbayed, Karim; Namer, Izzie Jacques

    2012-01-18

    Providing information on cancerous tissue samples during a surgical operation can help surgeons delineate the limits of a tumoral invasion more reliably. Here, we describe the use of metabolic profiling of a colon biopsy specimen by high resolution magic angle spinning nuclear magnetic resonance spectroscopy to evaluate tumoral invasion during a simulated surgical operation. Biopsy specimens (n = 9) originating from the excised right colon of a 66-year-old Caucasian woman with an adenocarcinoma were automatically analyzed using a previously built statistical model. Metabolic profiling results were in full agreement with those of a histopathological analysis. The time-response of the technique is sufficiently fast for it to be used effectively during a real operation (17 min/sample). Metabolic profiling has the potential to become a method to rapidly characterize cancerous biopsies in the operating theater.

  18. New ultra-high resolution dye laser spectrometer utilizing a non-tunable reference resonator

    NASA Astrophysics Data System (ADS)

    Helmcke, J.; Snyder, J. J.; Morinaga, A.; Mensing, F.; Gläser, M.

    1987-06-01

    A new dye laser spectrometer utilizing a non-tunable reference resonator is described. The resonator consists of two Zerodur mirrors optically contacted to a Zerodur spacer. Frequency scanning of the laser is provided by acousto-optic modulation. Residual drifts of the resonator frequency — measured on line — are compensated automatically by corresponding corrections of the modulation frequency. The stability over several hours and the resettability of the dye laser frequency are ±2.5 kHz and ±10 kHz, respectively.

  19. NMR reaction monitoring in flow synthesis

    PubMed Central

    Gomez, M Victoria

    2017-01-01

    Recent advances in the use of flow chemistry with in-line and on-line analysis by NMR are presented. The use of macro- and microreactors, coupled with standard and custom made NMR probes involving microcoils, incorporated into high resolution and benchtop NMR instruments is reviewed. Some recent selected applications have been collected, including synthetic applications, the determination of the kinetic and thermodynamic parameters and reaction optimization, even in single experiments and on the μL scale. Finally, software that allows automatic reaction monitoring and optimization is discussed. PMID:28326137

  20. Analysis of High Spatial, Temporal, and Directional Resolution Recordings of Biological Sounds in the Southern California Bight

    DTIC Science & Technology

    2014-09-30

    was used to scan the 1999 data set for biologically-created transient signals. Unfortunately, no humpback whale calls were found in the data set...to automatically scan the data for humpback whale and other biological sounds. Finally, the analyses also have used data from the CalCOFI program at...Hildebrand (2012). “A generalized power-law detection algorithm for humpback whale vocalizations,” J. Acoust. Soc. Am. 131(4), pp. 2682-2699. Lombard, E

  1. NMR reaction monitoring in flow synthesis.

    PubMed

    Gomez, M Victoria; de la Hoz, Antonio

    2017-01-01

    Recent advances in the use of flow chemistry with in-line and on-line analysis by NMR are presented. The use of macro- and microreactors, coupled with standard and custom made NMR probes involving microcoils, incorporated into high resolution and benchtop NMR instruments is reviewed. Some recent selected applications have been collected, including synthetic applications, the determination of the kinetic and thermodynamic parameters and reaction optimization, even in single experiments and on the μL scale. Finally, software that allows automatic reaction monitoring and optimization is discussed.

  2. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

    A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ), a fault zone exhumed from ca. 10 km depth and hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high resolution digital photographs processed with the VisualSFM software. By using high resolution photographs, the DOM can have a much higher resolution than with LIDAR surveys, up to 0.2 mm/pixel. Then, image processing is performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be separated from the host rock (tonalite) quite easily using spectral analysis; in particular, band ratios and principal component analysis have been tested successfully. The mapping of black pseudotachylyte veins is trickier because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we have tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous. After having tested and refined the image analysis processing on some typical images, we recorded a macro with ImageJ-Fiji that allows all the images for a given DOM to be processed. As a result, the three different types of rocks can be semi-automatically mapped on large DOMs using a simple and efficient procedure. This enables quantitative analyses of fault rock distribution and thickness, fault trace roughness/curvature and length, fault zone architecture, and alteration halos due to hydrothermal fluid-rock interaction. To improve our workflow, additional or different morphological operators could be integrated into our procedure to yield a better resolution of small and thin pseudotachylyte veins (e.g. perimeter/area ratio).
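
    The size and circularity thresholding used to suppress biotite grains can be sketched in Python as follows (the published workflow uses an ImageJ-Fiji macro); the thresholds and the toy binary map are illustrative only.

        import numpy as np
        from skimage.measure import label, regionprops

        def remove_small_round_grains(binary, max_area=50, min_circularity=0.6):
            """Drop small, nearly circular grains (biotite-like) from a binary map so
            that only larger or elongated vein segments remain."""
            labelled = label(binary)
            keep = np.zeros_like(binary, dtype=bool)
            for region in regionprops(labelled):
                if region.perimeter == 0:
                    continue
                circularity = 4.0 * np.pi * region.area / region.perimeter ** 2
                small_round = region.area <= max_area and circularity >= min_circularity
                if not small_round:
                    keep[labelled == region.label] = True
            return keep

        # Toy binary map: one long thin "vein" plus three small round "grains".
        img = np.zeros((100, 100), dtype=bool)
        img[48:52, 5:95] = True                           # thin vein
        yy, xx = np.ogrid[:100, :100]
        for cy, cx in [(20, 20), (75, 60), (30, 80)]:
            img |= (yy - cy) ** 2 + (xx - cx) ** 2 < 9    # small round grains
        print(int(img.sum()), int(remove_small_round_grains(img).sum()))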

  3. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnostic and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied on the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient, mean and maximum distance errors. Accuracy and robustness were assessed from 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was of 0.93 for both regions.

  4. Intra-operative adjustment of standard planes in C-arm CT image data.

    PubMed

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm makes a time-consuming manual adjustment necessary. In this article, we present an automatic plane adjustment using the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images, and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) also leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (the standard setting of the device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and against the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we have developed the first steps toward a fully automatic assistance system for the assessment of C-arm CT images.

  5. Permanent 3D laser scanning system for an active landslide in Gresten (Austria)

    NASA Astrophysics Data System (ADS)

    Canli, Ekrem; Höfle, Bernhard; Hämmerle, Martin; Benni, Thiebes; Glade, Thomas

    2015-04-01

    Terrestrial laser scanners (TLS) have been widely used for high spatial resolution data acquisition of topographic features and for geomorphic analyses. Existing applications encompass different landslide types, including rockfalls, translational and rotational landslides, and debris flows, as well as coastal cliff erosion, braided river evolution, and river bank erosion. The main advantages of TLS are (a) the high spatial sampling density of XYZ measurements (e.g. 1 point every 2-3 mm at 10 m distance), particularly in comparison with low data density monitoring techniques such as GNSS or total stations, (b) the millimeter accuracy and precision of the range measurement, translating to centimeter accuracy in the final DEM, and (c) the highly dense, area-wide scanning that makes it possible to look through vegetation and to measure the bare ground. One of its main constraints is the temporal resolution of the acquired data, due to labor costs and the time requirements of field campaigns. Thus, repeat measurements are generally performed only episodically. However, for an increased scientific understanding of the processes as well as for early warning purposes, we present a novel permanent 3D monitoring setup to increase the temporal resolution of TLS measurements. This supports different potential monitoring deliverables such as volumetric calculations, spatio-temporal movement patterns, predictions, and even alerting. The system was installed at the active Salcher landslide in Gresten (Austria), which is situated in the transition zone between the Gresten Klippenbelt (Helvetic) and the Flyschzone (Penninic). The characteristic lithofacies are the Gresten Beds of Early Jurassic age, covered by a sequence of marly and silty beds with intercalated sandy limestones. Permanent data acquisition can be implemented in our workflow with any long-range TLS system offering fully automated capturing. We utilize an Optech ILRIS-3D scanner. The time interval between two scans is currently set to 24 hours, but can be set to as short an interval as a full scan requires. The field of view (FoV) from the fixed scanner position covers most of the active landslide surface (with a maximum distance of 300 m). To initiate the scan acquisition, command line tools are run automatically on an attached notebook computer at the given time interval. The acquired 3D point cloud (including signal intensity recordings) is then sent to a server via automatic internet transfer. Each new point cloud is automatically compared with an initial 'zero' survey. Furthermore, highly detailed reference surveys are performed several times per year with the recent Riegl VZ-6000 scanner from multiple scan positions in order to provide high quality independent ground truth. The change detection is carried out by fully automatic batch processing without the need for manual interaction. One of the applied change detection approaches is the M3C2 algorithm (Multiscale Model to Model Cloud Comparison), which is available as open source software. The field site in Gresten also hosts other monitoring systems, such as inclinometers and piezometers, that complement the interpretation of the obtained TLS data. Future analysis will combine surface movement with subsurface hydrology as well as with climatic data obtained from an on-site climate station.

  6. Semantic labeling of high-resolution aerial images using an ensemble of fully convolutional networks

    NASA Astrophysics Data System (ADS)

    Sun, Xiaofeng; Shen, Shuhan; Lin, Xiangguo; Hu, Zhanyi

    2017-10-01

    High-resolution remote sensing data classification has been a challenging and promising research topic in the remote sensing community. In recent years, with the rapid advances of deep learning, remarkable progress has been made in this field, facilitating a transition from hand-crafted feature design to automatic end-to-end learning. A deep fully convolutional network (FCN) based ensemble learning method is proposed to label high-resolution aerial images. To fully tap the potential of FCNs, both the Visual Geometry Group network and a deeper residual network, ResNet, are employed. Furthermore, to enlarge the training samples with diversity and gain better generalization, in addition to the commonly used data augmentation methods (e.g., rotation, multiscale, and aspect ratio) in the literature, aerial images from other datasets are also collected for cross-scene learning. Finally, we combine these learned models to form an effective FCN ensemble and refine the results using a fully connected conditional random field graph model. Experiments on the ISPRS 2-D Semantic Labeling Contest dataset show that our proposed end-to-end classification method achieves an overall accuracy of 90.7%, a state-of-the-art result in the field.
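
    The ensemble step can be illustrated with the sketch below, which averages the per-class probability maps predicted by several networks and takes the arg-max class per pixel; the random outputs stand in for real FCN predictions, and the conditional random field refinement is not reproduced.

        import numpy as np

        def ensemble_label_map(prob_maps):
            """Average per-class probability maps from several FCNs (each of shape
            [classes, rows, cols]) and return the arg-max class per pixel."""
            mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
            return mean_prob.argmax(axis=0)

        # Toy example: three "network" outputs over a 4 x 4 tile with 5 classes.
        rng = np.random.default_rng(6)
        outputs = [rng.dirichlet(np.ones(5), size=(4, 4)).transpose(2, 0, 1)
                   for _ in range(3)]
        print(ensemble_label_map(outputs))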

  7. Non-targeted workflow for identification of antimicrobial compounds in animal feed using bioassay-directed screening in combination with liquid chromatography-high resolution mass spectrometry.

    PubMed

    Wegh, Robin S; Berendsen, Bjorn J A; Driessen-Van Lankveld, Wilma D M; Pikkemaat, Mariël G; Zuidema, Tina; Van Ginkel, Leen A

    2017-11-01

    A non-targeted workflow is reported for the isolation and identification of antimicrobial active compounds using bioassay-directed screening and LC coupled to high-resolution MS. Suspect samples are extracted using a generic protocol and fractionated using two different LC conditions (A and B). The behaviour of the bioactive compound under these different conditions yields information about the physicochemical properties of the compound and introduces variations in co-eluting compounds in the fractions, which is essential for peak picking and identification. The fractions containing the active compound(s) obtained with conditions A and B are selected using a microbiological effect-based bioassay. The selected bioactive fractions from A and B are analysed using LC combined with high-resolution MS. Selection of relevant signals is automatically carried out by selecting all signals present in both bioactive fractions A and B, yielding tremendous data reduction. The method was assessed using two spiked feed samples and subsequently applied to two feed samples containing an unidentified compound showing microbial growth inhibition. In all cases, the identity of the compound causing microbiological inhibition was successfully confirmed.
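
    The signal-selection step, keeping only the masses detected in the bioactive fraction from both LC conditions, can be sketched as a tolerance-based intersection of accurate-mass lists; the m/z values and the 5 ppm tolerance are placeholders.

        def common_mz_features(mz_list_a, mz_list_b, tol_ppm=5.0):
            """Keep the accurate masses detected in the bioactive fraction under both
            LC conditions A and B (matched on m/z only, since retention behaviour
            differs between the two LC conditions)."""
            kept = []
            for mz_a in mz_list_a:
                if any(abs(mz_a - mz_b) / mz_a * 1e6 <= tol_ppm for mz_b in mz_list_b):
                    kept.append(mz_a)
            return kept

        # Placeholder accurate masses from the two bioactive fractions.
        fraction_a = [312.1234, 450.2001, 289.0910]
        fraction_b = [312.1229, 198.0877, 450.1998]
        print(common_mz_features(fraction_a, fraction_b))   # [312.1234, 450.2001]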

  8. High spatiotemporal resolution monitoring of hydrological function across degraded peatlands in the south west UK.

    NASA Astrophysics Data System (ADS)

    Ashe, Josie; Luscombe, David; Grand-Clement, Emilie; Gatis, Naomi; Anderson, Karen; Brazier, Richard

    2014-05-01

    The Exmoor/Dartmoor Mires Project is a peatland restoration programme focused on the geoclimatically marginal blanket bogs of South West England. In order to better understand the hydrological functioning of degraded and restored peatlands and to support land management decisions across these uplands, this study provides robust, spatially distributed hydrological monitoring at high temporal resolution and in near real time. This paper presents the conceptual framework and experimental design for three hydrological monitoring arrays situated in headwater catchments dominated by eroding and drained blanket peatland. Over 250 individual measurements are collected at high temporal resolution (15 minute time-step) via sensors integrated within a remote telemetry system. These are sent directly to a dedicated server over VHF and GPRS mobile networks. Sensor arrays are distributed at varying spatial scales throughout the studied catchments and record multiple parameters including water table depth, channel flow, temperature, conductivity, and pH. A full suite of meteorological sensors and ten spatially distributed automatic flow-based water samplers are also connected to the telemetry system and controlled remotely. This paper will highlight the challenges and solutions involved in obtaining these data in exceptionally remote and harsh field conditions over long (multi-annual) temporal scales.

  9. An automated procedure for detection of IDP's dwellings using VHR satellite imagery

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Malgorzata; Kemper, Thomas; Soille, Pierre

    2011-11-01

    This paper presents the results of the estimation of dwelling structures in the Al Salam IDP Camp, Southern Darfur, based on Very High Resolution multispectral satellite images and obtained by applying Mathematical Morphology analysis. A series of image processing procedures, feature extraction methods, and textural analyses have been applied in order to provide reliable information about the dwelling structures. One of the issues in this context is the similarity of the spectral response of thatched dwelling roofs and their surroundings in IDP camps, where the exploitation of multispectral information is crucial. This study shows the advantage of an automatic extraction approach and highlights the importance of detailed spatial and spectral information analysis based on a multi-temporal dataset. The additional fusion of the high-resolution panchromatic band with the lower resolution multispectral bands of the WorldView-2 satellite has a positive influence on the results and can thereby be useful for humanitarian aid agencies, supporting decisions and population estimates, especially in situations where frequent revisits by spaceborne imaging systems are the only possibility for continued monitoring.

  10. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors.

    PubMed

    Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel

    2017-08-11

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.

  11. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors

    PubMed Central

    Lange, Maximilian; Rebmann, Corinna; Cuntz, Matthias; Doktor, Daniel

    2017-01-01

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure. PMID:28800065

  12. Estimating babassu palm density using automatic palm tree detection with very high spatial resolution satellite images.

    PubMed

    Dos Santos, Alessio Moreira; Mitja, Danielle; Delaître, Eric; Demagistri, Laurent; de Souza Miranda, Izildinha; Libourel, Thérèse; Petit, Michel

    2017-05-15

    High spatial resolution images as well as image processing and object detection algorithms are recent technologies that aid the study of biodiversity and of commercial plantations of forest species. This paper seeks to contribute knowledge regarding the use of these technologies by studying randomly dispersed native palm trees. Here, we analyze the automatic detection of large circular crown (LCC) palm trees using a high spatial resolution panchromatic GeoEye image (0.50 m) taken over the area of a community of small agricultural farms in the Brazilian Amazon. We also propose auxiliary methods to estimate the density of the LCC palm tree Attalea speciosa (babassu) based on the detection results. We used the "Compt-palm" algorithm, based on the detection of palm tree shadows in open areas via mathematical morphology techniques, and the spatial information was validated using field methods (i.e. structural census and georeferencing). The algorithm recognized individuals in life stages 5 and 6, and the extraction percentage, branching factor and quality percentage were used to evaluate its performance. A principal components analysis showed that the structure of the studied species differs from that of other species. Approximately 96% of the babassu individuals in stage 6 were detected; these individuals had significantly smaller stipes than the undetected ones. In turn, 60% of the stage 5 babassu individuals were detected, showing a significantly different total height and number of leaves from the undetected ones. Our calculations regarding resource availability indicate that 6870 ha contained 25,015 adult babassu palm trees, with an annual potential productivity of 27.4 t of almond oil. The detection of LCC palm trees and the implementation of auxiliary field methods to estimate babassu density are an important first step toward monitoring, over large scales, an industry resource that is extremely important to the Brazilian economy and to thousands of families. Copyright © 2017 Elsevier Ltd. All rights reserved.
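
    The extraction percentage, branching factor and quality percentage mentioned above are usually defined from true-positive (TP), false-positive (FP) and false-negative (FN) detection counts; the sketch below uses those commonly cited definitions, which are an assumption here, since the paper may parameterize them slightly differently.

    ```python
    def detection_metrics(tp, fp, fn):
        """Object-detection quality factors (assumed standard definitions):
        extraction percentage = 100 * TP / (TP + FN)
        branching factor      = FP / TP
        quality percentage    = 100 * TP / (TP + FP + FN)
        """
        return {
            "extraction_pct": 100.0 * tp / (tp + fn),
            "branching_factor": fp / tp,
            "quality_pct": 100.0 * tp / (tp + fp + fn),
        }

    # Illustrative counts only (not the paper's data)
    print(detection_metrics(tp=96, fp=10, fn=4))
    ```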

  13. Impact of positional difference on the measurement of breast density using MRI.

    PubMed

    Chen, Jeon-Hor; Chan, Siwa; Tang, Yi-Ting; Hon, Jia Shen; Tseng, Po-Chuan; Cheriyan, Angela T; Shah, Nikita Rakesh; Yeh, Dah-Cherng; Lee, San-Kan; Chen, Wen-Pin; McLaren, Christine E; Su, Min-Ying

    2015-05-01

    This study investigated the impact of arms/hands and body position on the measurement of breast density using MRI. Noncontrast-enhanced T1-weighted images were acquired from 32 healthy women. Each subject received four MR scans using different experimental settings, including a high resolution hands-up, a low resolution hands-up, a high resolution hands-down, and finally, another high resolution hands-up after repositioning. The breast segmentation was performed using a fully automatic chest template-based method. The breast volume (BV), fibroglandular tissue volume (FV), and percent density (PD) measured from the four MR scan settings were analyzed. A high correlation of BV, FV, and PD between any pair of the four MR scans was noted (r > 0.98 for all). Using the generalized estimating equation method, a statistically significant difference in mean BV among four settings was noted (left breast, score test p = 0.0056; right breast, score test p = 0.0016), adjusted for age and body mass index. Despite differences in BV, there were no statistically significant differences in the mean PDs among the four settings (p > 0.10 for left and right breasts). Using Bland-Altman plots, the smallest mean difference/bias and standard deviations for BV, FV, and PD were noted when comparing hands-up high vs low resolution when the breast positions were exactly the same. The authors' study showed that BV, FV, and PD measurements from MRI of different positions were highly correlated. BV may vary with positions but the measured PD did not differ significantly between positions. The study suggested that the percent density analyzed from MRI studies acquired using different arms/hands and body positions from multiple centers can be combined for analysis.
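
    Two quantities underpin this comparison: percent density, the ratio of fibroglandular to total breast volume, and the Bland-Altman bias between paired scan settings. A minimal sketch with hypothetical volumes (not the study's measurements) is shown below.

    ```python
    import numpy as np

    def percent_density(fv_cm3, bv_cm3):
        """Percent density PD = 100 * fibroglandular volume / breast volume."""
        return 100.0 * np.asarray(fv_cm3, float) / np.asarray(bv_cm3, float)

    def bland_altman(a, b):
        """Mean difference (bias) and SD of differences between paired measurements."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        diff = a - b
        return diff.mean(), diff.std(ddof=1)

    # Hypothetical PD values from two scan positions for the same subjects
    pd_hands_up   = percent_density([120, 95, 60], [900, 750, 480])
    pd_hands_down = percent_density([118, 97, 59], [930, 760, 470])
    bias, sd = bland_altman(pd_hands_up, pd_hands_down)
    print(round(bias, 3), round(sd, 3))
    ```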

  14. Cloud-Free Satellite Image Mosaics with Regression Trees and Histogram Matching.

    Treesearch

    E.H. Helmer; B. Ruefenacht

    2005-01-01

    Cloud-free optical satellite imagery simplifies remote sensing, but land-cover phenology limits existing solutions to persistent cloudiness to compositing temporally resolute, spatially coarser imagery. Here, a new strategy for developing cloud-free imagery at finer resolution permits simple automatic change detection. The strategy uses regression trees to predict...

  15. Automatic detection of larynx cancer from contrast-enhanced magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Doshi, Trushali; Soraghan, John; Grose, Derek; MacKenzie, Kenneth; Petropoulakis, Lykourgos

    2015-03-01

    Detection of larynx cancer from medical imaging is important for quantification and for the definition of target volumes in radiotherapy treatment planning (RTP). Magnetic resonance imaging (MRI) is being increasingly used in RTP due to its high resolution and excellent soft tissue contrast. Manually detecting larynx cancer from sequential MRI is time consuming and subjective. The large diversity of cancer in terms of geometry and non-distinct boundaries, combined with the presence of normal anatomical regions close to the cancer regions, necessitates the development of automatic and robust algorithms for this task. A new automatic algorithm for the detection of larynx cancer from 2D gadolinium-enhanced T1-weighted (T1+Gd) MRI to assist clinicians in RTP is presented. The algorithm employs edge detection using spatial neighborhood information of pixels and incorporates this information in a fuzzy c-means clustering process to robustly separate different tissue types. Furthermore, it utilizes information on the expected cancerous location for labeling cancer regions. Comparison of this automatic detection system with manual clinical detection on real T1+Gd axial MRI slices of 2 patients (24 MRI slices) with visible larynx cancer yields an average Dice similarity coefficient of 0.78+/-0.04 and an average root mean square error of 1.82+/-0.28 mm. Preliminary results show that this fully automatic system can assist clinicians in RTP by providing quantifiable, non-subjective and repeatable detection results in a time-efficient and unbiased fashion.
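
    The two evaluation measures quoted, the Dice similarity coefficient and a boundary root-mean-square error, are straightforward to compute once an automatic and a manual mask are available. The sketch below uses small synthetic masks and illustrative distances; it is not the authors' evaluation code.

    ```python
    import numpy as np

    def dice(mask_a, mask_b):
        """Dice similarity coefficient between two binary masks:
        DSC = 2 * |A intersect B| / (|A| + |B|)."""
        a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def boundary_rmse(dist_mm):
        """Root-mean-square of point-to-contour distances (in mm)."""
        d = np.asarray(dist_mm, float)
        return np.sqrt(np.mean(d ** 2))

    # Synthetic automatic vs. manual segmentations
    auto   = np.zeros((64, 64), bool); auto[20:40, 20:40] = True
    manual = np.zeros((64, 64), bool); manual[22:40, 21:41] = True
    print(round(dice(auto, manual), 3))
    print(round(boundary_rmse([1.2, 2.1, 0.8, 1.9]), 3))
    ```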

  16. Design criteria for a high energy Compton Camera and possible application to targeted cancer therapy

    NASA Astrophysics Data System (ADS)

    Conka Nurdan, T.; Nurdan, K.; Brill, A. B.; Walenta, A. H.

    2015-07-01

    The proposed research focuses on the design criteria for a Compton Camera with high spatial resolution and sensitivity, operating at high gamma energies, and its possible application to molecular imaging. This application mainly concerns the detection and visualization of the pharmacokinetics of tumor-targeting substances specific for particular cancer sites. The expected high resolution (< 0.5 mm) permits monitoring the pharmacokinetics of labeled gene constructs in vivo in small animals with a human tumor xenograft, which is one of the first steps in evaluating the potential utility of a candidate gene. The additional benefit of high-sensitivity detection will be improved cancer treatment strategies in patients, based on the use of specific molecules binding to cancer sites for early detection of tumors and identification of metastases, monitoring of drug delivery, and radionuclide therapy for optimum cell killing at the tumor site. This new technology can provide high resolution, high sensitivity imaging over a wide range of gamma energies and will significantly extend the range of radiotracers that can be investigated and used clinically. The small and compact construction of the proposed camera system allows flexible application, which will be particularly useful for monitoring residual tumor around the resection site during surgery. It is also envisaged for testing the performance of new drug/gene-based therapies in vitro and in vivo for tumor-targeting efficacy using automatic large-scale screening methods.

  17. cisTEM, user-friendly software for single-particle image processing.

    PubMed

    Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus

    2018-03-07

    We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200 k - 300 k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.

  18. cisTEM, user-friendly software for single-particle image processing

    PubMed Central

    2018-01-01

    We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200 k – 300 k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. PMID:29513216

  19. Automatic detection and agronomic characterization of olive groves using high-resolution imagery and LIDAR data

    NASA Astrophysics Data System (ADS)

    Caruso, T.; Rühl, J.; Sciortino, R.; Marra, F. P.; La Scalia, G.

    2014-10-01

    The Common Agricultural Policy of the European Union grants subsidies for olive production. Areas of intensified olive farming will be of major importance for the increasing demand for oil production in the coming decades, and countries with a high ratio of intensively and super-intensively managed olive groves will be more competitive than others, since they are able to reduce production costs. It can be estimated that about 25-40% of Sicilian oliviculture must be defined as "marginal". Modern olive cultivation systems, which permit the mechanization of pruning and harvest operations, are limited. Agronomists, landscape planners, policy decision-makers and other professionals have a growing need for accurate and cost-effective information on land use in general and on agronomic parameters in particular. The availability of high spatial resolution imagery has enabled researchers to propose analysis tools at the agricultural parcel and tree level. In our study, we test the performance of WorldView-2 imagery for the detection of olive groves and the delineation of olive tree crowns, using an object-oriented approach to image classification in combined use with LIDAR data. We selected two sites, which differ in their environmental conditions and in the agronomic parameters of olive grove cultivation. The main advantages of the proposed methodology are the small quantity of input data required and its potential for automation. However, it should be applied in other study areas to test whether the good accuracy assessment results can be confirmed. Data extracted by the proposed methodology can be used as input for decision-making support systems for olive grove management.

  20. SpotMetrics: An Open-Source Image-Analysis Software Plugin for Automatic Chromatophore Detection and Measurement.

    PubMed

    Hadjisolomou, Stavros P; El-Haddad, George

    2017-01-01

    Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, much can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores over time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high-framerate recording, which can be used to record chromatophore activity in greater detail and with higher accuracy in both the space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only a few minutes. We created a software plugin, "SpotMetrics," that can automatically analyze high-resolution, high-framerate video of chromatophore organ activation over time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. The software may also be used to measure differences in chromatophore activation during different behaviors, which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, it can potentially be utilized to detect numbers of round objects and size changes over time, such as eye pupil size or the number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source, because we believe it will be of benefit to colleagues in the cephalopod biology field as well as in other disciplines.

  1. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    PubMed Central

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851

  2. Thermal infrared panoramic imaging sensor

    NASA Astrophysics Data System (ADS)

    Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey

    2006-05-01

    Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside a protected area ensures maximum protection and at the same time reduces the workload on personnel, increases the reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as those required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8-14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.

  3. Distributed health care imaging information systems

    NASA Astrophysics Data System (ADS)

    Thompson, Mary R.; Johnston, William E.; Guojun, Jin; Lee, Jason; Tierney, Brian; Terdiman, Joseph F.

    1997-05-01

    We have developed an ATM network-based system to collect and catalogue cardio-angiogram videos from the source at a Kaiser central facility and make them available for viewing by doctors at primary care Kaiser facilities. This is an example of the general problem of diagnostic data being generated at tertiary facilities, while the images, or other large data objects they produce, need to be used from a variety of other locations such as doctors' offices or local hospitals. We describe the use of a highly distributed computing and storage architecture to provide all aspects of collecting, storing, analyzing, and accessing such large data-objects in a metropolitan area ATM network. Our large data-object management system provides the network interface between the object sources, the data management system and the users of the data. As the data is being stored, a cataloguing system automatically creates and stores condensed versions of the data, textual metadata and pointers to the original data. The catalogue system provides a Web-based graphical interface to the data. The user is able to view the low-resolution data with a standard Internet connection and Web browser. If high resolution is required, a high-speed connection and special application programs can be used to view the high-resolution original data.

  4. A high-resolution peak fractionation approach for streamlined screening of nuclear-factor-E2-related factor-2 activators in Salvia miltiorrhiza.

    PubMed

    Zhang, Hui; Luo, Li-Ping; Song, Hui-Peng; Hao, Hai-Ping; Zhou, Ping; Qi, Lian-Wen; Li, Ping; Chen, Jun

    2014-01-24

    Generation of a high-purity fraction library for efficiently screening active compounds from natural products is challenging because of their chemical diversity and complex matrices. In this work, a strategy combining high-resolution peak fractionation (HRPF) with a cell-based assay was proposed for target screening of bioactive constituents from natural products. In this approach, peak fractionation was conducted under chromatographic conditions optimized for high-resolution separation of the natural product extract. The HRPF approach was automatically performed according to the predefinition of certain peaks based on their retention times from a reference chromatographic profile. The corresponding HRPF database was collected with a parallel mass spectrometer to ensure purity and characterize the structures of compounds in the various fractions. Using this approach, a set of 75 peak fractions on the microgram scale was generated from 4mg of the extract of Salvia miltiorrhiza. After screening by an ARE-luciferase reporter gene assay, 20 diterpene quinones were selected and identified, and 16 of these compounds were reported to possess novel Nrf2 activation activity. Compared with conventional fixed-time interval fractionation, the HRPF approach could significantly improve the efficiency of bioactive compound discovery and facilitate the uncovering of minor active components. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Distributed optical fiber temperature sensor (DOFTS) system applied to automatic temperature alarm of coal mine and tunnel

    NASA Astrophysics Data System (ADS)

    Zhang, Zaixuan; Wang, Kequan; Kim, Insoo S.; Wang, Jianfeng; Feng, Haiqi; Guo, Ning; Yu, Xiangdong; Zhou, Bangquan; Wu, Xiaobiao; Kim, Yohee

    2000-05-01

    A DOFTS system applied to the automatic temperature alarm systems of coal mines and tunnels has been developed. It is a real-time, on-line, multi-point measurement system. The LD wavelength is 1550 nm; along a 6 km optical fiber, temperature signals are sampled at 3000 points with known spatial positions. Temperature measurement range: -50 °C to 100 °C; measurement uncertainty: +/- 3 °C; temperature resolution: 0.1 °C; spatial resolution: <5 cm (optical fiber sensor probe), <8 m (distributed optical fiber); measurement time: <70 s. The operating principles, underground tests, test contents and practical test results are discussed.

  6. Automatic concrete cracks detection and mapping of terrestrial laser scan data

    NASA Astrophysics Data System (ADS)

    Rabah, Mostafa; Elhattab, Ahmed; Fayad, Atef

    2013-12-01

    Terrestrial laser scanning has become one of the standard technologies for object acquisition in surveying engineering. The high spatial resolution of imaging and the excellent capability of measuring 3D space by laser scanning have great potential when combined for both data acquisition and data compilation. Automatic crack detection from concrete surface images is very effective for nondestructive testing. The crack information can be used to decide the appropriate rehabilitation method for fixing cracked structures and preventing catastrophic failure. In practice, cracks on concrete surfaces are traced manually for diagnosis; automatic crack detection is therefore highly desirable for efficient and objective crack assessment. The current paper presents a method for automatic detection and mapping of concrete cracks from data obtained during a laser scanning survey. Crack detection and mapping are achieved in three steps: shading correction of the original image, crack detection, and crack mapping and processing. The detected crack is initially defined in a pixel coordinate system. To remap the crack into the required coordinate system, a reverse-engineering approach is used, based on a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.
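
    The remapping step, from crack pixels to the laser-scanner or global coordinate system, amounts to back-projecting a pixel with a known range (taken from the co-registered point cloud) and applying a rigid transform. The sketch below illustrates that idea with assumed intrinsic and extrinsic calibration values; it is a simplified stand-in for the hybrid procedure described in the paper.

    ```python
    import numpy as np

    def pixel_to_global(u, v, depth, K, R, t):
        """Back-project image pixel (u, v) with known depth (from the
        co-registered point cloud) into camera coordinates, then apply a
        rigid transform [R | t] into the scanner/global frame.
        K : 3x3 camera intrinsics, R : 3x3 rotation, t : 3-vector translation
        (all assumed known from the hybrid camera/laser-scanner calibration).
        """
        pixel_h = np.array([u, v, 1.0])
        xyz_cam = depth * np.linalg.inv(K) @ pixel_h   # camera-frame 3-D point
        return R @ xyz_cam + t                         # global-frame coordinates

    # Illustrative calibration values only
    K = np.array([[1500.0, 0, 640], [0, 1500.0, 480], [0, 0, 1]])
    R = np.eye(3)
    t = np.array([10.0, 2.0, 0.5])
    print(pixel_to_global(700, 500, depth=8.0, K=K, R=R, t=t).round(3))
    ```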

  7. Image acquisition system for traffic monitoring applications

    NASA Astrophysics Data System (ADS)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling at up to 160 km/hr. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g. cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods which permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of producing automatic classification of vehicle class and recording of vehicle numberplates with a success rate around 90 percent over a period of 24 hours.

  8. Automatic correspondence detection in mammogram and breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Krüger, Julia; Bischof, Arpad; Barkhausen, Jörg; Handels, Heinz

    2012-02-01

    Two-dimensional mammography is the major imaging modality in breast cancer detection. A disadvantage of mammography is the projective nature of this imaging technique. Tomosynthesis is an attractive modality with the potential to combine the high contrast and high resolution of digital mammography with the advantages of 3D imaging. In order to facilitate diagnostics and treatment in the current clinical work-flow, correspondences between tomosynthesis images and previous mammographic exams of the same women have to be determined. In this paper, we propose a method to detect correspondences in 2D mammograms and 3D tomosynthesis images automatically. In general, this 2D/3D correspondence problem is ill-posed, because a point in the 2D mammogram corresponds to a line in the 3D tomosynthesis image. The goal of our method is to detect the "most probable" 3D position in the tomosynthesis images corresponding to a selected point in the 2D mammogram. We present two alternative approaches to solve this 2D/3D correspondence problem: a 2D/3D registration method and a 2D/2D mapping between mammogram and tomosynthesis projection images with a following back projection. The advantages and limitations of both approaches are discussed and the performance of the methods is evaluated qualitatively and quantitatively using a software phantom and clinical breast image data. Although the proposed 2D/3D registration method can compensate for moderate breast deformations caused by different breast compressions, this approach is not suitable for clinical tomosynthesis data due to the limited resolution and blurring effects perpendicular to the direction of projection. The quantitative results show that the proposed 2D/2D mapping method is capable of detecting corresponding positions in mammograms and tomosynthesis images automatically for 61 out of 65 landmarks. The proposed method can facilitate diagnosis, visual inspection and comparison of 2D mammograms and 3D tomosynthesis images for the physician.

  9. Modified Fabry-Perot interferometer for displacement measurement in ultra large measuring range

    NASA Astrophysics Data System (ADS)

    Chang, Chung-Ping; Tung, Pi-Cheng; Shyu, Lih-Horng; Wang, Yung-Cheng; Manske, Eberhard

    2013-05-01

    Laser interferometers have demonstrated outstanding measuring performance for high precision positioning and dimensional measurements in the precision industry, especially in length measurement. Due to the non-common-optical-path structure, appreciable measurement errors can easily be induced under ordinary measurement conditions, which limits the convenience of in situ industrial applications. To minimize environmental and mechanical effects, a new interferometric displacement measuring system with a common-optical-path structure and tolerance to tilt angles is proposed. With the integration of optomechatronic modules in the novel interferometric system, resolution up to the picometer order, high precision, and an ultra-large measuring range have been realized. For the signal stabilization of displacement measurement, an automatic gain control module has been proposed, and a self-developed interpolation model has been employed to enhance the resolution. The novel interferometer thus holds the advantages of high resolution and a large measuring range simultaneously. Experimental verification has proven that an actual resolution of 2.5 nm can be achieved over a measuring range of 500 mm. According to the comparison experiments, the maximal standard deviation of the difference between the self-developed Fabry-Perot interferometer and the reference commercial Michelson interferometer is 0.146 μm over a traveling range of 500 mm. With these prominent measuring characteristics, this should be the largest dynamic measurement range of a Fabry-Perot interferometer to date.

  10. Realistic 3D computer model of the gerbil middle ear, featuring accurate morphology of bone and soft tissue structures.

    PubMed

    Buytaert, Jan A N; Salih, Wasil H M; Dierick, Manual; Jacobs, Patric; Dirckx, Joris J J

    2011-12-01

    In order to improve realism in middle ear (ME) finite-element modeling (FEM), comprehensive and precise morphological data are needed. To date, micro-scale X-ray computed tomography (μCT) recordings have been used as geometric input data for FEM models of the ME ossicles. Previously, attempts were made to obtain these data on ME soft tissue structures as well. However, due to low X-ray absorption of soft tissue, quality of these images is limited. Another popular approach is using histological sections as data for 3D models, delivering high in-plane resolution for the sections, but the technique is destructive in nature and registration of the sections is difficult. We combine data from high-resolution μCT recordings with data from high-resolution orthogonal-plane fluorescence optical-sectioning microscopy (OPFOS), both obtained on the same gerbil specimen. State-of-the-art μCT delivers high-resolution data on the 3D shape of ossicles and other ME bony structures, while the OPFOS setup generates data of unprecedented quality both on bone and soft tissue ME structures. Each of these techniques is tomographic and non-destructive and delivers sets of automatically aligned virtual sections. The datasets coming from different techniques need to be registered with respect to each other. By combining both datasets, we obtain a complete high-resolution morphological model of all functional components in the gerbil ME. The resulting 3D model can be readily imported in FEM software and is made freely available to the research community. In this paper, we discuss the methods used, present the resulting merged model, and discuss the morphological properties of the soft tissue structures, such as muscles and ligaments.

  11. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95%), and progress is being made towards identifying the mapped spectral classes.
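
    The maximum likelihood step assigns each pixel to the training class whose multivariate Gaussian model gives the highest likelihood. A compact sketch of that rule, with two made-up spectral classes in a four-band space, is given below; it is a generic illustration rather than the LARS implementation.

    ```python
    import numpy as np

    def ml_classify(pixels, means, covs):
        """Assign each pixel (row vector of band values) to the class with the
        highest Gaussian log-likelihood, given per-class means and covariances
        estimated from training/cluster statistics."""
        pixels = np.atleast_2d(pixels).astype(float)
        scores = []
        for mu, cov in zip(means, covs):
            diff = pixels - mu
            inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
            maha = np.einsum('ij,jk,ik->i', diff, inv, diff)  # Mahalanobis distances
            scores.append(-0.5 * (maha + logdet))             # log-likelihood up to a constant
        return np.argmax(np.stack(scores, axis=1), axis=1)

    # Two illustrative spectral classes in a 4-band space (values are made up)
    means = [np.array([30, 40, 35, 60.]), np.array([80, 90, 70, 20.])]
    covs  = [np.eye(4) * 25.0, np.eye(4) * 25.0]
    print(ml_classify([[32, 41, 33, 58], [78, 88, 72, 22]], means, covs))  # -> [0 1]
    ```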

  12. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95 percent), and progress is being made towards identifying the mapped spectral classes.

  13. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  14. Autonomous Exploration for Gathering Increased Science

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin J.; Castano, Rebecca; Estlin, Tara A.; Gaines, Daniel M.; Anderson, Robert C.; Thompson, David R.; DeGranville, Charles K.; Chien, Steve A.; Tang, Benyang; Burl, Michael C.; hide

    2010-01-01

    The Autonomous Exploration for Gathering Increased Science System (AEGIS) provides automated targeting for remote sensing instruments on the Mars Exploration Rover (MER) mission, which at the time of this reporting has had two rovers exploring the surface of Mars. Currently, targets for rover remote-sensing instruments must be selected manually, based on imagery already on the ground with the operations team. AEGIS enables the rover flight software to analyze imagery onboard in order to autonomously select and sequence targeted remote-sensing observations in an opportunistic fashion. In particular, this technology will be used to automatically acquire sub-framed, high-resolution, targeted images taken with the MER panoramic cameras. This software provides: 1) automatic detection of terrain features in rover camera images, 2) feature extraction for detected terrain targets, 3) prioritization of terrain targets based on a scientist-defined target feature set, and 4) automated re-targeting of rover remote-sensing instruments at the highest priority target.

  15. Comparison of manual and automatic techniques for substriatal segmentation in 11C-raclopride high-resolution PET studies.

    PubMed

    Johansson, Jarkko; Alakurtti, Kati; Joutsa, Juho; Tohka, Jussi; Ruotsalainen, Ulla; Rinne, Juha O

    2016-10-01

    The striatum is the primary target in regional 11C-raclopride-PET studies, and despite its small volume, it contains several functional and anatomical subregions. The outcome of a quantitative dopamine receptor study using 11C-raclopride-PET depends heavily on the quality of the region-of-interest (ROI) definition of these subregions. The aim of this study was to evaluate subregional analysis techniques, because new approaches have emerged but have not yet been compared directly. In this paper, we compared manual ROI delineation with several automatic methods. The automatic methods used either direct clustering of the PET image or individualization of chosen brain atlases on the basis of MRI or PET image normalization. State-of-the-art normalization methods and atlases were applied, including those provided in the FreeSurfer, Statistical Parametric Mapping8, and FSL software packages. Evaluation of the automatic methods was based on voxel-wise congruity with the manual delineations and on the test-retest variability and reliability of the outcome measures, using data from seven healthy male participants who were scanned twice with 11C-raclopride-PET on the same day. The results show that both manual and automatic methods can be used to define striatal subregions. Although most of the methods performed well with respect to the test-retest variability and reliability of binding potential, the smallest average test-retest variability and SEM were obtained using a connectivity-based atlas and PET normalization (test-retest variability = 4.5%, SEM = 0.17). The current state-of-the-art automatic ROI methods can be considered good alternatives to subjective and laborious manual segmentation in 11C-raclopride-PET studies.
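
    The outcome measures reported here, test-retest variability and the standard error of measurement (SEM), follow standard definitions; the sketch below implements the commonly used forms (absolute percent difference relative to the scan mean, and SEM = SD * sqrt(1 - ICC)) on hypothetical binding-potential values, which is an assumption about the exact formulas used in the paper.

    ```python
    import numpy as np

    def test_retest_variability(scan1, scan2):
        """Absolute test-retest variability per subject (%), a common definition:
        100 * |x1 - x2| / mean(x1, x2), then averaged across subjects."""
        x1, x2 = np.asarray(scan1, float), np.asarray(scan2, float)
        return np.mean(100.0 * np.abs(x1 - x2) / ((x1 + x2) / 2.0))

    def sem_from_sd_icc(sd, icc):
        """Standard error of measurement, SEM = SD * sqrt(1 - ICC)."""
        return sd * np.sqrt(1.0 - icc)

    # Hypothetical binding-potential values from two same-day scans
    bp_scan1 = [2.9, 3.1, 2.7, 3.3, 2.8, 3.0, 3.2]
    bp_scan2 = [3.0, 3.0, 2.8, 3.1, 2.9, 3.1, 3.1]
    print(round(test_retest_variability(bp_scan1, bp_scan2), 2))
    print(round(sem_from_sd_icc(sd=np.std(bp_scan1, ddof=1), icc=0.9), 3))
    ```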

  16. Automatic delineation of brain regions on MRI and PET images from the pig.

    PubMed

    Villadsen, Jonas; Hansen, Hanne D; Jørgensen, Louise M; Keller, Sune H; Andersen, Flemming L; Petersen, Ida N; Knudsen, Gitte M; Svarer, Claus

    2018-01-15

    The increasing use of the pig as a research model in neuroimaging requires standardized processing tools. For example, extraction of regional dynamic time series from brain PET images requires parcellation procedures that benefit from being automated. Manual inter-modality spatial normalization to an MRI atlas is operator-dependent, time-consuming, and can be inaccurate when cortical radiotracer binding or skull uptake is lacking. Here we present a parcellated PET template that allows for automatic spatial normalization to PET images of any radiotracer. MRI and [11C]Cimbi-36 PET scans obtained in sixteen pigs formed the basis of the atlas. The high-resolution MRI scans allowed for the creation of an accurately averaged MRI template. By aligning the within-subject PET scans to their MRI counterparts, an averaged PET template was created in the same space. We developed an automatic procedure for spatial normalization of the averaged PET template to new PET images and hereby facilitated transfer of the atlas regional parcellation. Evaluation of the automatic spatial normalization procedure found the median voxel displacement to be 0.22±0.08 mm using the MRI template with individual MRI images and 0.92±0.26 mm using the PET template with individual [11C]Cimbi-36 PET images. We tested the automatic procedure on eleven PET radiotracers with different kinetics and spatial distributions, using perfusion-weighted images of early PET time frames. We here present an automatic procedure for accurate and reproducible spatial normalization and parcellation of pig PET images of any radiotracer with reasonable blood-brain barrier penetration. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. The edge-preservation multi-classifier relearning framework for the classification of high-resolution remotely sensed imagery

    NASA Astrophysics Data System (ADS)

    Han, Xiaopeng; Huang, Xin; Li, Jiayi; Li, Yansheng; Yang, Michael Ying; Gong, Jianya

    2018-04-01

    In recent years, the availability of high-resolution imagery has enabled more detailed observation of the Earth. However, it is imperative to simultaneously achieve accurate interpretation and preserve the spatial details in the classification of such high-resolution data. To this end, we propose the edge-preservation multi-classifier relearning framework (EMRF). This multi-classifier framework is made up of support vector machine (SVM), random forest (RF), and sparse multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL) classifiers, chosen for their complementary characteristics. To better characterize complex scenes in remote sensing images, relearning based on landscape metrics is proposed, which iteratively quantifies both the landscape composition and the spatial configuration using the initial classification results. In addition, a novel tri-training strategy is proposed to counter the over-smoothing effect of relearning by automatically selecting training samples with low classification certainties, which are always distributed in or near the edge areas. Finally, EMRF flexibly combines the strengths of relearning and tri-training via the classification certainties calculated from the probabilistic output of the respective classifiers. It should be noted that, in order to achieve an unbiased evaluation, we assessed the classification accuracy of the proposed framework using both edge and non-edge test samples. The experimental results obtained with four multispectral high-resolution images confirm the efficacy of the proposed framework in terms of both edge and non-edge accuracy.

  18. Velocities along Byrd Glacier, East Antarctica, derived from Automatic Feature Tracking

    NASA Astrophysics Data System (ADS)

    Stearns, L. A.; Hamilton, G. S.

    2003-12-01

    Automatic feature tracking techniques are applied to recently acquired ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) imagery in order to determine the velocity field of Byrd Glacier, East Antarctica. The IMCORR software tracks the displacement of surface features (crevasses, drift mounds) in time-sequential images to produce the velocity field. Due to its high resolution, ASTER imagery is ideally suited for detecting small feature changes. The result is a dense array of velocity vectors, which allows a more thorough characterization of glacier dynamics. Byrd Glacier drains approximately 20.5 km3 of ice into the Ross Ice Shelf every year. Previous studies have determined ice velocities for Byrd Glacier using photogrammetry, field measurements and manual feature tracking. The most recent velocity data are from 1986 and, as is evident in the West Antarctic ice streams, substantial changes in velocity can occur on decadal time scales. The application of ASTER-based velocities fills this temporal gap, and the increased temporal resolution allows for a more complete analysis of Byrd Glacier. The ASTER-derived ice velocities are used to update mass balance and force budget calculations to assess the stability of Byrd Glacier. Ice thickness information from BEDMAP, surface slopes from the OSUDEM and a compilation of accumulation rates are used to complete the calculations.
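
    Feature-tracking tools such as IMCORR work by matching small image chips between the two acquisition dates via cross-correlation. The following is a deliberately simplified sketch of that idea (brute-force normalized cross-correlation over a small search window), verified on a synthetically shifted scene; the real software adds sub-pixel fitting and quality filtering.

    ```python
    import numpy as np

    def track_offset(img1, img2, center, chip=16, search=8):
        """Find the displacement of a feature chip from img1 within a search
        window in img2 by maximizing normalized cross-correlation.
        center : (row, col) of the chip centre in img1
        chip   : half-size of the reference chip (pixels)
        search : maximum displacement searched in each direction (pixels)
        """
        r, c = center
        ref = img1[r - chip:r + chip, c - chip:c + chip].astype(float)
        ref = (ref - ref.mean()) / ref.std()
        best, best_off = -np.inf, (0, 0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                win = img2[r + dr - chip:r + dr + chip,
                           c + dc - chip:c + dc + chip].astype(float)
                win = (win - win.mean()) / win.std()
                ncc = np.mean(ref * win)
                if ncc > best:
                    best, best_off = ncc, (dr, dc)
        return best_off, best

    # Synthetic test: shift a random scene by (3, 5) pixels and recover the offset
    rng = np.random.default_rng(0)
    scene = rng.random((128, 128))
    shifted = np.roll(scene, shift=(3, 5), axis=(0, 1))
    offset, score = track_offset(scene, shifted, center=(64, 64))
    print(offset)  # -> (3, 5); velocity = offset * pixel_size / time_separation
    ```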

  19. REFMAC5 for the refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murshudov, Garib N., E-mail: garib@ysbl.york.ac.uk; Skubák, Pavol; Lebedev, Andrey A.

    The general principles behind the macromolecular crystal structure refinement program REFMAC5 are described. This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, ‘jelly-body’ restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback–Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography.

  20. Visualization and tissue classification of human breast cancer images using ultrahigh-resolution OCT.

    PubMed

    Yao, Xinwen; Gan, Yu; Chang, Ernest; Hibshoosh, Hanina; Feldman, Sheldon; Hendon, Christine

    2017-03-01

    Breast cancer is one of the most common cancers, and recognized as the third leading cause of mortality in women. Optical coherence tomography (OCT) enables three-dimensional visualization of biological tissue with micrometer-level resolution at high speed, and can play an important role in early diagnosis and treatment guidance of breast cancer. In particular, ultra-high resolution (UHR) OCT provides images with better histological correlation. This paper compared UHR OCT performance with standard OCT in breast cancer imaging qualitatively and quantitatively. Automatic tissue classification algorithms were used to automatically detect invasive ductal carcinoma in ex vivo human breast tissue. Human breast tissues, including non-neoplastic/normal tissues from breast reduction and tumor samples from mastectomy specimens, were excised from patients at Columbia University Medical Center. The tissue specimens were imaged by two spectral domain OCT systems at different wavelengths: a home-built ultra-high resolution (UHR) OCT system at 800 nm (measured as 2.72 μm axial and 5.52 μm lateral) and a commercial OCT system at 1,300 nm with standard resolution (measured as 6.5 μm axial and 15 μm lateral), and their imaging performances were analyzed qualitatively. Using regional features derived from OCT images produced by the two systems, we developed an automated classification algorithm based on the relevance vector machine (RVM) to differentiate hollow-structured adipose tissue from solid tissue. We further developed B-scan-based features for RVM to classify invasive ductal carcinoma (IDC) against normal fibrous stroma tissue among OCT datasets produced by the two systems. For adipose classification, 32 UHR OCT B-scans from 9 normal specimens, and 28 standard OCT B-scans from 6 normal and 4 IDC specimens were employed. For IDC classification, 152 UHR OCT B-scans from 6 normal and 13 IDC specimens, and 104 standard OCT B-scans from 5 normal and 8 IDC specimens were employed. We have demonstrated that the UHR OCT system can produce images with better feature delineation compared with images produced by the 1,300 nm OCT system. UHR OCT images of a variety of tissue types found in human breast tissue were presented. With a limited number of datasets, we showed that both OCT systems can achieve good accuracy in identifying adipose tissue. Classification of UHR OCT images achieved higher sensitivity (94%) and specificity (93%) for adipose tissue than the sensitivity (91%) and specificity (76%) achieved with 1,300 nm OCT images. In IDC classification, similarly, we achieved better results with UHR OCT images, with an overall accuracy of 84%, sensitivity of 89% and specificity of 71% in this preliminary study. In this study, we provided UHR OCT images of different normal and malignant breast tissue types, and qualitatively and quantitatively studied the texture and optical features from OCT images of human breast tissue at different resolutions. We developed an automated approach to differentiate adipose tissue, fibrous stroma, and IDC within human breast tissues. Our work may open the door toward automatic intraoperative OCT evaluation of early-stage breast cancer. Lasers Surg. Med. 49:258-269, 2017. © 2017 Wiley Periodicals, Inc.
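
    The sensitivity, specificity and accuracy figures reported above come from standard confusion-matrix ratios; the short sketch below computes them from illustrative counts (not the study's data).

    ```python
    def classification_summary(tp, fp, tn, fn):
        """Sensitivity, specificity and overall accuracy from a confusion matrix."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
        }

    # Illustrative counts for an IDC-vs-stroma B-scan classifier
    print(classification_summary(tp=89, fp=29, tn=71, fn=11))
    ```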

  1. DLA based compressed sensing for high resolution MR microscopy of neuronal tissue.

    PubMed

    Nguyen, Khieu-Van; Li, Jing-Rebecca; Radecki, Guillaume; Ciobanu, Luisa

    2015-10-01

    In this work we present the implementation of compressed sensing (CS) on a high field preclinical scanner (17.2 T) using an undersampling trajectory based on the diffusion limited aggregation (DLA) random growth model. When applied to a library of images this approach performs better than the traditional undersampling based on the polynomial probability density function. In addition, we show that the method is applicable to imaging live neuronal tissues, allowing significantly shorter acquisition times while maintaining the image quality necessary for identifying the majority of neurons via an automatic cell segmentation algorithm. Copyright © 2015 Elsevier Inc. All rights reserved.
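
    To give a feel for a DLA-based undersampling pattern, the toy sketch below grows an aggregate on a small Cartesian k-space grid: random walkers stick when they touch the existing cluster, which is seeded at the k-space centre. Grid size, fill fraction and walk caps are arbitrary choices for illustration and do not reproduce the authors' trajectory design.

    ```python
    import numpy as np

    def dla_mask(n=32, fill=0.12, seed=0):
        """Toy diffusion-limited-aggregation (DLA) growth on an n x n k-space grid.
        Returns a boolean undersampling mask; parameters are illustrative only."""
        rng = np.random.default_rng(seed)
        mask = np.zeros((n, n), bool)
        mask[n // 2, n // 2] = True                     # seed at the k-space centre
        target = int(fill * n * n)
        steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        while mask.sum() < target:
            r, c = rng.integers(0, n, size=2)           # launch a walker anywhere
            if mask[r, c]:
                continue
            for _ in range(20 * n):                     # cap the walk length
                nbrs = [(r + dr, c + dc) for dr, dc in steps
                        if 0 <= r + dr < n and 0 <= c + dc < n]
                if any(mask[i, j] for i, j in nbrs):    # touched the aggregate: stick
                    mask[r, c] = True
                    break
                r, c = nbrs[rng.integers(len(nbrs))]    # otherwise keep walking
        return mask

    mask = dla_mask()
    print(mask.sum(), "of", mask.size, "k-space samples retained")
    ```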

  2. A new approach to untargeted integration of high resolution liquid chromatography-mass spectrometry data.

    PubMed

    van der Kloet, Frans M; Hendriks, Margriet; Hankemeier, Thomas; Reijmers, Theo

    2013-11-01

    Because of its high sensitivity and specificity, hyphenated mass spectrometry has become the predominant method to detect and quantify metabolites present in bio-samples relevant to all sorts of life science studies. In contrast to targeted methods that are dedicated to specific features, global profiling acquisition methods allow new, unspecific metabolites to be analyzed. The challenge with these so-called untargeted methods is the proper and automated extraction and integration of features that could be of relevance. We propose a new algorithm that enables untargeted integration of samples that are measured with high resolution liquid chromatography-mass spectrometry (LC-MS). In contrast to other approaches, limited user interaction is needed, which also allows less experienced users to integrate their data. The large number of single features found within a sample is combined into a smaller list of compound-related, grouped feature-sets representative of that sample. These feature-sets allow for easier interpretation and identification and, just as importantly, easier matching across samples. We show that the automatically obtained integration results for a set of known target metabolites match those generated with vendor software, but that at least 10 times more feature-sets are extracted as well. We demonstrate our approach using high resolution LC-MS data acquired for 128 samples on a lipidomics platform. The data were also processed in a targeted manner (with a combination of automatic and manual integration) using vendor software for a set of 174 targets. As our untargeted extraction procedure is run per sample and per mass trace, its implementation is scalable. Because of the generic approach, we envision that this data extraction method will be used in targeted as well as untargeted analyses of many different kinds of TOF-MS data, even CE- and GC-MS data or MRM. The Matlab package is available for download on request and efforts are directed toward a user-friendly Windows executable. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Automated Verification of Spatial Resolution in Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald

    2011-01-01

    Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data set, enabling the appropriate use of those images in a number of applications.
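
    A minimal sketch of the edge-based estimators mentioned above, assuming a 1-D edge response has already been extracted perpendicular to a detected high-contrast edge; the SRVT itself is a MATLAB tool and its exact definitions may differ.

    import numpy as np

    def edge_metrics(esr, dx=1.0):
        """Estimate RER and MTF from a 1-D edge spread response.

        `esr` is a dark-to-bright edge profile sampled perpendicular to an edge
        at spacing `dx` (pixels). RER is the rise of the normalized edge
        response between -0.5 and +0.5 pixel of the edge location; the MTF is
        the magnitude of the Fourier transform of the line spread function
        (derivative of the edge response).
        """
        esr = np.asarray(esr, dtype=float)
        esr = (esr - esr.min()) / (esr.max() - esr.min())   # normalize to 0..1
        lsf = np.gradient(esr, dx)                          # line spread function
        centre = np.argmax(lsf)                             # coarse edge location
        x = (np.arange(esr.size) - centre) * dx
        rer = np.interp(0.5, x, esr) - np.interp(-0.5, x, esr)
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                                       # unity at zero frequency
        freqs = np.fft.rfftfreq(lsf.size, d=dx)             # cycles per pixel
        return rer, freqs, mtf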

  4. High-throughput analysis of yeast replicative aging using a microfluidic system

    PubMed Central

    Jo, Myeong Chan; Liu, Wei; Gu, Liang; Dang, Weiwei; Qin, Lidong

    2015-01-01

    Saccharomyces cerevisiae has been an important model for studying the molecular mechanisms of aging in eukaryotic cells. However, the laborious and low-throughput methods of current yeast replicative lifespan assays limit their usefulness as a broad genetic screening platform for research on aging. We address this limitation by developing an efficient, high-throughput microfluidic single-cell analysis chip in combination with high-resolution time-lapse microscopy. This innovative design enables, to our knowledge for the first time, the determination of the yeast replicative lifespan in a high-throughput manner. Morphological and phenotypical changes during aging can also be monitored automatically with a much higher throughput than previous microfluidic designs. We demonstrate highly efficient trapping and retention of mother cells, determination of the replicative lifespan, and tracking of yeast cells throughout their entire lifespan. Using the high-resolution and large-scale data generated from the high-throughput yeast aging analysis (HYAA) chips, we investigated particular longevity-related changes in cell morphology and characteristics, including critical cell size, terminal morphology, and protein subcellular localization. In addition, because of the significantly improved retention rate of yeast mother cells, the HYAA-Chip was capable of demonstrating replicative lifespan extension by calorie restriction. PMID:26170317

  5. The Effect of Rainfall Measurement Technique and Its Spatiotemporal Resolution on Discharge Predictions in the Netherlands

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Brauer, C.; Overeem, A.; Sassi, M.; Rios Gaona, M. F.

    2014-12-01

    Several rainfall measurement techniques are available for hydrological applications, each with its own spatial and temporal resolution. We investigated the effect of these spatiotemporal resolutions on discharge simulations in lowland catchments by forcing a novel rainfall-runoff model (WALRUS) with rainfall data from gauges, radars and microwave links. The hydrological model used for this analysis is the recently developed Wageningen Lowland Runoff Simulator (WALRUS). WALRUS is a rainfall-runoff model accounting for hydrological processes relevant to areas with shallow groundwater (e.g. groundwater-surface water feedback). Here, we used WALRUS for case studies in a freely draining lowland catchment and a polder with controlled water levels. We used rain gauge networks with automatic (hourly resolution but low spatial density) and manual gauges (high spatial density but daily resolution). Operational (real-time) and climatological (gauge-adjusted) C-band radar products and country-wide rainfall maps derived from microwave link data from a cellular telecommunication network were also used. Discharges simulated with these different inputs were compared to observations. We also investigated the effect of spatiotemporal resolution with a high-resolution X-band radar data set for catchments with different sizes. Uncertainty in rainfall forcing is a major source of uncertainty in discharge predictions, both with lumped and with distributed models. For lumped rainfall-runoff models, the main source of input uncertainty is associated with the way in which (effective) catchment-average rainfall is estimated. When catchments are divided into sub-catchments, rainfall spatial variability can become more important, especially during convective rainfall events, leading to spatially varying catchment wetness and spatially varying contribution of quick flow routes. Improving rainfall measurements and their spatiotemporal resolution can improve the performance of rainfall-runoff models, indicating their potential for reducing flood damage through real-time control.

  6. Computer vision-based diameter maps to study fluoroscopic recordings of small intestinal motility from conscious experimental animals.

    PubMed

    Ramírez, I; Pantrigo, J J; Montemayor, A S; López-Pérez, A E; Martín-Fontelles, M I; Brookes, S J H; Abalo, R

    2017-08-01

    When available, fluoroscopic recordings are a relatively cheap, non-invasive and technically straightforward way to study gastrointestinal motility. Spatiotemporal maps have been used to characterize motility of intestinal preparations in vitro, or in anesthetized animals in vivo. Here, a new automated computer-based method was used to construct spatiotemporal motility maps from fluoroscopic recordings obtained in conscious rats. Conscious, non-fasted, adult, male Wistar rats (n=8) received intragastric administration of barium contrast, and 1-2 hours later, when several loops of the small intestine were well-defined, a 2-minute fluoroscopic recording was obtained. Spatiotemporal diameter maps (Dmaps) were automatically calculated from the recordings. Three recordings were also manually analyzed for comparison. Frequency analysis was performed in order to calculate relevant motility parameters. In each conscious rat, a stable recording (17-20 seconds) was analyzed. The Dmaps manually and automatically obtained from the same recording were comparable, but the automated process was faster and provided higher resolution. Two frequencies of motor activity dominated; lower frequency contractions (15.2±0.9 cpm) had an amplitude approximately five times greater than higher frequency events (32.8±0.7 cpm). The automated method developed here needed little investigator input, provided high-resolution results with short computing times, and automatically compensated for breathing and other small movements, allowing recordings to be made without anesthesia. Although slow and/or infrequent events could not be detected in the short recording periods analyzed to date (17-20 seconds), this novel system enhances the analysis of in vivo motility in conscious animals. © 2017 John Wiley & Sons Ltd.
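
    A hedged sketch of how such a diameter map and its dominant contraction frequency could be computed, assuming a binary segmentation of the intestinal loop is available for every frame; it is illustrative only and not the authors' software.

    import numpy as np

    def diameter_map(binary_frames):
        """Build a spatiotemporal diameter map (Dmap).

        `binary_frames` is a (time, rows, cols) boolean array in which the
        segmented intestinal loop is True. Each column of the Dmap holds, per
        frame, the number of segmented pixels in the corresponding image
        column, a simple proxy for local gut diameter (illustrative only).
        """
        return binary_frames.sum(axis=1).astype(float)       # shape (time, cols)

    def dominant_frequency(dmap, fps):
        """Return the dominant contraction frequency in cycles per minute."""
        signal = dmap.mean(axis=1)                            # average diameter trace
        signal -= signal.mean()
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)     # Hz
        return 60.0 * freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin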

  7. Automatic three-dimensional registration of intravascular optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Ughi, Giovanni J.; Adriaenssens, Tom; Larsson, Matilda; Dubois, Christophe; Sinnaeve, Peter R.; Coosemans, Mark; Desmet, Walter; D'hooge, Jan

    2012-02-01

    Intravascular optical coherence tomography (IV-OCT) is a catheter-based high-resolution imaging technique able to visualize the inner wall of the coronary arteries and implanted devices in vivo with an axial resolution below 20 μm. IV-OCT is being used in several clinical trials aiming to quantify the vessel response to stent implantation over time. However, stent analysis is currently performed manually and corresponding images taken at different time points are matched through a very labor-intensive and subjective procedure. We present an automated method for the spatial registration of IV-OCT datasets. Stent struts are segmented through consecutive images and three-dimensional models of the stents are created for both datasets to be registered. The two models are initially roughly registered through an automatic initialization procedure and an iterative closest point algorithm is subsequently applied for a more precise registration. To correct for nonuniform rotational distortions (NURDs) and other potential acquisition artifacts, the registration is consecutively refined on a local level. The algorithm was first validated by using an in vitro experimental setup based on a polyvinyl-alcohol gel tubular phantom. Subsequently, an in vivo validation was obtained by exploiting stable vessel landmarks. The mean registration error in vitro was quantified to be 0.14 mm in the longitudinal axis and 7.3-deg mean rotation error. In vivo validation resulted in 0.23 mm in the longitudinal axis and 10.1-deg rotation error. These results indicate that the proposed methodology can be used for automatic registration of in vivo IV-OCT datasets. Such a tool will be indispensable for larger studies on vessel healing pathophysiology and reaction to stent implantation. As such, it will be valuable in testing the performance of new generations of intracoronary devices and new therapeutic drugs.
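
    The registration step relies on an iterative closest point algorithm; a minimal point-to-point ICP over two strut point clouds is sketched below, assuming a coarse initial alignment as in the text. It is a generic textbook ICP, not the authors' refined, NURD-compensated pipeline.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=50):
        """Rigidly register `source` (N,3) onto `target` (M,3) with basic ICP."""
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        for _ in range(iterations):
            # 1. correspondences: nearest target point for every source point
            _, idx = tree.query(src)
            matched = target[idx]
            # 2. best-fit rigid transform (Kabsch / SVD)
            src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
            H = (src - src_c).T @ (matched - tgt_c)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = tgt_c - R @ src_c
            # 3. apply and accumulate the incremental transform
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total, src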

  8. Micro axial tomography: A miniaturized, versatile stage device to overcome resolution anisotropy in fluorescence light microscopy

    NASA Astrophysics Data System (ADS)

    Staier, Florian; Eipel, Heinz; Matula, Petr; Evsikov, Alexei V.; Kozubek, Michal; Cremer, Christoph; Hausmann, Michael

    2011-09-01

    With the development of novel fluorescence techniques, high resolution light microscopy has become a challenging technique for investigations of the three-dimensional (3D) micro-cosmos in cells and sub-cellular components. So far, all fluorescence microscopes applied for 3D imaging in biosciences show a spatially anisotropic point spread function, resulting in an anisotropic optical resolution or point localization precision. To overcome this shortcoming, micro axial tomography was suggested, which allows object tilting on the microscope stage and leads to an improvement in localization precision and spatial resolution. Here, we present a miniaturized device which can be implemented in a motor-driven microscope stage. The footprint of this device corresponds to a standard microscope slide. A special glass fiber can be manually adjusted in the object space of the microscope lens. Stepwise fiber rotation is controlled by a miniaturized stepping motor incorporated into the device. By means of a special mounting device, test particles were fixed onto glass fibers, optically localized with high precision, and automatically rotated to obtain views from different perspective angles under which distances of corresponding pairs of objects were determined. From these angle-dependent distance values, the real 3D distance was calculated with a precision in the ten nanometer range (corresponding here to an optical resolution of 10-30 nm) using standard microscopic equipment. As a proof of concept, the spindle apparatus of a mature mouse oocyte was imaged during metaphase II meiotic arrest under different perspectives. Only very few images registered under different rotation angles are sufficient for a full 3D reconstruction. The results indicate the principal advantage of the micro axial tomography approach for many microscopic setups, including those with improved resolution obtained through high-precision localization determination.

  9. High spatial resolution mapping of folds and fractures using Unmanned Aerial Vehicle (UAV) photogrammetry

    NASA Astrophysics Data System (ADS)

    Cruden, A. R.; Vollgger, S.

    2016-12-01

    The emerging capability of UAV photogrammetry combines a simple and cost-effective method to acquire digital aerial images with advanced computer vision algorithms that compute spatial datasets from a sequence of overlapping digital photographs taken from various viewpoints. Depending on flight altitude and camera setup, sub-centimeter spatial resolution orthophotographs and textured dense point clouds can be achieved. Orientation data can be collected for detailed structural analysis by digitally mapping such high-resolution spatial datasets in a fraction of the time and with higher fidelity compared to traditional mapping techniques. Here we describe a photogrammetric workflow applied to a structural study of folds and fractures within alternating layers of sandstone and mudstone at a coastal outcrop in SE Australia. We surveyed this location using a downward-looking digital camera mounted on a commercially available multi-rotor UAV that autonomously followed waypoints at a set altitude and speed to ensure sufficient image overlap, minimum motion blur and an appropriate resolution. The use of surveyed ground control points allowed us to produce a geo-referenced 3D point cloud and an orthophotograph from hundreds of digital images at a spatial resolution < 10 mm per pixel, and cm-scale location accuracy. Orientation data of brittle and ductile structures were semi-automatically extracted from these high-resolution datasets using open-source software. This resulted in an extensive and statistically relevant orientation dataset that was used to 1) interpret the progressive development of folds and faults in the region, and 2) generate a 3D structural model that underlines the complex internal structure of the outcrop and quantifies spatial variations in fold geometries. Overall, our work highlights how UAV photogrammetry can contribute new insights to structural analysis.

  10. Automatic alignment of individual peaks in large high-resolution spectral data sets

    NASA Astrophysics Data System (ADS)

    Stoyanova, Radka; Nicholls, Andrew W.; Nicholson, Jeremy K.; Lindon, John C.; Brown, Truman R.

    2004-10-01

    Pattern recognition techniques are effective tools for reducing the information contained in large spectral data sets to a much smaller number of significant features which can then be used to make interpretations about the chemical or biochemical system under study. Often the effectiveness of such approaches is impeded by experimental and instrument-induced variations in the position, phase, and line width of the spectral peaks. Although characterizing the cause and magnitude of these fluctuations could be important in its own right (pH-induced NMR chemical shift changes, for example), in general they obscure the process of pattern discovery. One major area of application is the use of large databases of 1H NMR spectra of biofluids such as urine for investigating perturbations in metabolic profiles caused by drugs or disease, a process now termed metabonomics. Frequency shifts of individual peaks are the dominant source of such unwanted variations in this type of data. In this paper, an automatic procedure for aligning the individual peaks in the data set is described and evaluated. The proposed method will be vital for the efficient and automatic analysis of large metabonomic data sets and should also be applicable to other types of data.
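
    A minimal sketch of per-peak alignment by maximizing cross-correlation of a peak region against a reference spectrum; the segment boundaries, shift range and use of the first spectrum as reference are assumptions for illustration, not the procedure evaluated in the paper.

    import numpy as np

    def align_peak_region(spectra, start, stop, max_shift=20):
        """Align one peak region across a stack of spectra.

        `spectra` is a (n_spectra, n_points) array. The segment [start:stop]
        of each spectrum is shifted (within +/- max_shift points) to maximize
        its cross-correlation with the same segment of the first spectrum,
        then written back in place (illustrative only).
        """
        aligned = spectra.copy()
        reference = spectra[0, start:stop]
        for i in range(1, spectra.shape[0]):
            segment = spectra[i, start:stop]
            best_shift, best_score = 0, -np.inf
            for shift in range(-max_shift, max_shift + 1):
                score = np.dot(reference, np.roll(segment, shift))
                if score > best_score:
                    best_shift, best_score = shift, score
            aligned[i, start:stop] = np.roll(segment, best_shift)
        return aligned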

  11. UAS-based automatic bird count of a common gull colony

    NASA Astrophysics Data System (ADS)

    Grenzdörffer, G. J.

    2013-08-01

    The standard procedure to count birds is a manual one. However, a manual bird count is a time-consuming and cumbersome process, requiring several people going from nest to nest counting the birds and the clutches. High-resolution imagery generated with a UAS (Unmanned Aircraft System) offers an interesting alternative. Experiences and results of UAS surveys for automatic bird counts over the last two years are presented for the bird reserve island Langenwerder. For 2011, 1568 birds (± 5%) were detected on the image mosaic, based on multispectral image classification and GIS-based post-processing. Based on the experiences of 2011, the automatic bird count of 2012 became more efficient and more accurate: 1938 birds were counted with an accuracy of approximately ± 3%. Additionally, a separation of breeding and non-breeding birds was performed under the assumption that standing birds cast a visible shadow. The final section of the paper is devoted to the analysis of the 3D point cloud, which was used to determine the height of the vegetation and the extent and depth of closed sinks, which are unsuitable for breeding birds.

  12. Automatic segmentation of low-visibility moving objects through energy analysis of the local 3D spectrum

    NASA Astrophysics Data System (ADS)

    Nestares, Oscar; Miravet, Carlos; Santamaria, Javier; Fonolla Navarro, Rafael

    1999-05-01

    Automatic object segmentation in highly noisy image sequences, composed of a translating object over a background having a different motion, is achieved through joint motion-texture analysis. Local motion and/or texture is characterized by the energy of the local spatio-temporal spectrum, as different textures undergoing different translational motions display distinctive features in their 3D (x,y,t) spectra. Measurements of local spectrum energy are obtained using a bank of directional 3rd-order Gaussian derivative filters in a multiresolution pyramid in space-time (10 directions, 3 resolution levels). These 30 energy measurements form a feature vector describing texture-motion for every pixel in the sequence. To improve discrimination capability and reduce computational cost, we automatically select the 4 features (channels) that best discriminate object from background, under the assumptions that the object is smaller than the background and has a different velocity or texture. In this way we reject features that are irrelevant or dominated by noise, which could yield wrong segmentation results. This method has been successfully applied to sequences with extremely low visibility and to objects that are invisible to the eye in the absence of motion.

  13. Conformation-dependent restraints for polynucleotides: I. Clustering of the geometry of the phosphodiester group

    PubMed Central

    Kowiel, Marcin; Brzezinski, Dariusz; Jaskolski, Mariusz

    2016-01-01

    The refinement of macromolecular structures is usually aided by prior stereochemical knowledge in the form of geometrical restraints. Such restraints are also used for the flexible sugar-phosphate backbones of nucleic acids. However, recent highly accurate structural studies of DNA suggest that the phosphate bond angles may be inadequately described in the existing stereochemical dictionaries. In this paper, we analyze the bonding deformations of the phosphodiester groups in the Cambridge Structural Database, cluster the studied fragments into six conformation-related categories and propose a revised set of restraints for the O-P-O bond angles and distances. The proposed restraints have been positively validated against data from the Nucleic Acid Database and an ultrahigh-resolution Z-DNA structure in the Protein Data Bank. Additionally, the manual classification of PO4 geometry is compared with geometrical clusters automatically discovered by machine learning methods. The machine learning cluster analysis provides useful insights and a practical example of general applications of clustering algorithms for the automatic discovery of hidden patterns of molecular geometry. Finally, we describe the implementation and application of a public-domain web server for automatic generation of the proposed restraints. PMID:27521371
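
    A hedged sketch of how such geometrical clusters could be discovered automatically, here with k-means on a hypothetical table of O-P-O bond angles; the choice of features, of k = 6 and of k-means itself are illustrative assumptions rather than the analysis reported in the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical input: one row per phosphodiester fragment, columns are the
    # six O-P-O bond angles (degrees). A real study would use measured values
    # and possibly additional geometric descriptors.
    angles = np.random.default_rng(1).normal(loc=105.0, scale=4.0, size=(500, 6))

    # Group fragments into six conformation-related clusters, mirroring the six
    # categories mentioned in the abstract (illustrative choice of k).
    model = KMeans(n_clusters=6, n_init=10, random_state=0).fit(angles)
    labels = model.labels_                 # cluster index per fragment
    centroids = model.cluster_centers_     # mean O-P-O geometry per cluster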

  14. Hierarchically Structured Non-Intrusive Sign Language Recognition. Chapter 2

    NASA Technical Reports Server (NTRS)

    Zieren, Jorg; Zieren, Jorg; Kraiss, Karl-Friedrich

    2007-01-01

    This work presents a hierarchically structured approach to the non-intrusive recognition of sign language from a monocular frontal view. Robustness is achieved through sophisticated localization and tracking methods, including a combined EM/CAMSHIFT overlap resolution procedure and the parallel pursuit of multiple hypotheses about hand position and movement. This allows ambiguities to be handled and tracking errors to be corrected automatically. High-level knowledge is represented by a biomechanical skeleton model and dynamic motion prediction using Kalman filters. Classification is performed by Hidden Markov Models. 152 signs from German Sign Language were recognized with an accuracy of 97.6%.

  15. Multiple directed graph large-class multi-spectral processor

    NASA Technical Reports Server (NTRS)

    Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki

    1988-01-01

    Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.

  16. Towards fish-eye camera based in-home activity assessment.

    PubMed

    Bas, Erhan; Erdogmus, Deniz; Ozertem, Umut; Pavel, Misha

    2008-01-01

    Indoors localization, activity classification, and behavioral modeling are increasingly important for surveillance applications including independent living and remote health monitoring. In this paper, we study the suitability of fish-eye cameras (high-resolution CCD sensors with very-wide-angle lenses) for the purpose of monitoring people in indoors environments. The results indicate that these sensors are very useful for automatic activity monitoring and people tracking. We identify practical and mathematical problems related to information extraction from these video sequences and identify future directions to solve these issues.

  17. Automatic Traffic Advisory and Resolution Service (ATARS) Algorithms Including Resolution-Advisory-Register Logic. Volume 2. Sections 12 through 19. Appendices,

    DTIC Science & Technology

    1981-06-01

    A Pair Record is created for a pairwise conflict or an indication of BCAS control, and also when an aircraft receives a resolution advisory from BCAS or from a non-replying site.

  18. Towards the automatic detection and analysis of sunspot rotation

    NASA Astrophysics Data System (ADS)

    Brown, Daniel S.; Walker, Andrew P.

    2016-10-01

    Torsional rotation of sunspots has been noted by many authors over the past century. Sunspots have been observed to rotate by up to the order of 200 degrees over 8-10 days, and these rotations have often been linked with eruptive behaviour such as solar flares and coronal mass ejections. However, most studies in the literature are case studies or small-number studies which suffer from selection bias. In order to better understand sunspot rotation and its impact on the corona, unbiased large-sample statistical studies are required (including both rotating and non-rotating sunspots). While this can be done manually, a better approach is to automate the detection and analysis of rotating sunspots using robust methods with well characterised uncertainties. The SDO/HMI instrument provides long-duration, high-resolution and high-cadence continuum observations suitable for extracting a large number of examples of rotating sunspots. This presentation will outline the analysis of SDO/HMI data to determine the rotation (and non-rotation) profiles of sunspots for the complete duration of their transit across the solar disk, along with how this can be extended to automatically identify sunspots and initiate their analysis.

  19. Integrated enzyme reactor and high resolving chromatography in "sub-chip" dimensions for sensitive protein mass spectrometry.

    PubMed

    Hustoft, Hanne Kolsrud; Brandtzaeg, Ole Kristian; Rogeberg, Magnus; Misaghian, Dorna; Torsetnes, Silje Bøen; Greibrokk, Tyge; Reubsaet, Léon; Wilson, Steven Ray; Lundanes, Elsa

    2013-12-16

    Reliable, sensitive and automatable analytical methodology is of great value in e.g. cancer diagnostics. In this context, an on-line system for enzymatic cleavage of proteins and subsequent peptide separation by liquid chromatography (LC) with mass spectrometric detection has been developed using "sub-chip" columns (10-20 μm inner diameter, ID). The system could detect attomole amounts of the isolated cancer biomarker progastrin-releasing peptide (ProGRP), in a more automatable fashion compared to previous methods. The workflow combines protein digestion using a 20 μm ID immobilized trypsin reactor with a polymeric layer of 2-hydroxyethyl methacrylate-vinyl azlactone (HEMA-VDM), desalting on a polystyrene-divinylbenzene (PS-DVB) monolithic trap column, and subsequent separation of the resulting peptides on a 10 μm ID PS-DVB porous layer open tubular (PLOT) column. The high resolution of the PLOT columns was maintained in the on-line system, resulting in narrow chromatographic peaks of 3-5 seconds. The trypsin reactors provided repeatable performance and were compatible with long-term storage.

  20. Rainfall estimation from microwave links in São Paulo, Brazil.

    NASA Astrophysics Data System (ADS)

    Rios Gaona, Manuel Felipe; Overeem, Aart; Leijnse, Hidde; Uijlenhoet, Remko

    2017-04-01

    Rainfall estimation from microwave link networks has been successfully demonstrated in countries such as the Netherlands, Israel and Germany. The path-averaged rainfall intensity can be computed from the signal attenuation between cell phone towers. Although this technique is still in development, it offers great opportunities to retrieve rainfall rates at high spatiotemporal resolutions very close to the ground surface. High spatiotemporal resolutions and closer-to-ground measurements are highly appreciated, especially in urban catchments where high-impact events such as flash-floods develop in short time scales. We evaluate here this rainfall measurement technique for a tropical climate, something that has hardly been done previously. This is highly relevant since many countries with few surface rainfall observations are located in the tropics. The test-bed is the Brazilian city of São Paulo. The performance of 16 microwave links was evaluated, from a network of 200 links, for the last 3 months of 2014. The open software package RAINLINK was employed to obtain link rainfall estimates. The evaluation was done through a dense automatic gauge network. Results are promising and encouraging, especially for short links for which a high correlation (> 0.9) and a low bias (< 5%) were obtained.
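
    At its core, the retrieval converts the attenuation of the received signal into a path-averaged rain rate through a power law; a minimal sketch is given below. The coefficients, variable names and the simple baseline handling are assumptions for illustration; RAINLINK additionally performs wet/dry classification, wet-antenna and outlier corrections.

    def rain_rate_from_attenuation(p_min_dbm, p_max_dbm, baseline_dbm,
                                   length_km, a=0.33, b=1.05):
        """Path-averaged rain rate (mm/h) from microwave link power levels.

        Specific attenuation k (dB/km) is derived from the drop of the
        received power below a dry-weather baseline and converted with the
        power law R = a * k**b. The coefficients a and b are placeholders;
        in practice they depend on link frequency and polarization, and
        additional corrections (e.g. wet antenna attenuation) are applied.
        """
        attenuation_db = max(baseline_dbm - min(p_min_dbm, p_max_dbm), 0.0)
        k = attenuation_db / length_km            # specific attenuation, dB/km
        return a * k ** b                          # rain rate, mm/h

    # Example: a 3 km link whose received power dropped 6 dB below its baseline
    print(rain_rate_from_attenuation(-48.0, -46.0, -42.0, 3.0))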

  1. a Rough Set Decision Tree Based Mlp-Cnn for Very High Resolution Remotely Sensed Image Classification

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.

    2017-09-01

    Recent advances in remote sensing have witnessed a great amount of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges in processing, analysing and classifying them effectively due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are oriented toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correctness and incorrectness on the map. The correct classification regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to achieve fully automatic and effective VHR image classification.

  2. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  3. G.A.M.E.: GPU-accelerated mixture elucidator.

    PubMed

    Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J

    2017-09-15

    GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. However, this elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits for input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures for practical procedures.
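
    The pseudo-polynomial DP mentioned above can be illustrated with a coin-change style count over integerized masses; the sidechain masses, the scaling by decimal digits and the single-use assumption below are illustrative and not G.A.M.E.'s CUDA implementation.

    def sidechain_combinations(target_mass, sidechain_masses, decimals=3):
        """Count sidechain mass combinations that sum to a target residual mass.

        Masses are scaled to integers with `decimals` decimal digits, turning
        the matching problem into a subset-sum style dynamic program that runs
        in pseudo-polynomial time. More decimal digits mean a larger DP table,
        which is what motivates the GPU acceleration described in the abstract.
        """
        scale = 10 ** decimals
        target = round(target_mass * scale)
        masses = [round(m * scale) for m in sidechain_masses]
        ways = [0] * (target + 1)
        ways[0] = 1
        for m in masses:                      # each sidechain may be used once
            for t in range(target, m - 1, -1):
                ways[t] += ways[t - m]
        return ways[target]

    # Example: residual mass 60.021 Da against three hypothetical sidechains
    print(sidechain_combinations(60.021, [15.011, 45.010, 60.021]))   # -> 2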

  4. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  5. Data Flow for the TERRA-REF project

    NASA Astrophysics Data System (ADS)

    Kooper, R.; Burnette, M.; Maloney, J.; LeBauer, D.

    2017-12-01

    The Transportation Energy Resources from Renewable Agriculture Phenotyping Reference Platform (TERRA-REF) program aims to identify crop traits that are best suited to producing high-energy sustainable biofuels and match those plant characteristics to their genes to speed the plant breeding process. One tool used to achieve this goal is a high-throughput phenotyping robot outfitted with sensors and cameras to monitor the growth of 1.25 acres of sorghum. Data types range from hyperspectral imaging to 3D reconstructions and thermal profiles, all at 1mm resolution. This system produces thousands of daily measurements with high spatiotemporal resolution. The team at NCSA processes, annotates, organizes and stores the massive amounts of data produced by this system - up to 5 TB per day. Data from the sensors is streamed to a local gantry-cache server. The standardized sensor raw data stream is automatically and securely delivered to NCSA using Globus Connect service. Once files have been successfully received by the Globus endpoint, the files are removed from the gantry-cache server. As each dataset arrives or is created the Clowder system automatically triggers different software tools to analyze each file, extract information, and convert files to a common format. Other tools can be triggered to run after all required data is uploaded. For example, a stitched image of the entire field is created after all images of the field become available. Some of these tools were developed by external collaborators based on predictive models and algorithms, others were developed as part of other projects and could be leveraged by the TERRA project. Data will be stored for the lifetime of the project and is estimated to reach 10 PB over 3 years. The Clowder system, BETY and other systems will allow users to easily find data by browsing or searching the extracted information.

  6. Cerebral Correlates of Automatic Associations Towards Performance Enhancing Substances

    PubMed Central

    Schindler, Sebastian; Wolff, Wanja

    2015-01-01

    The direct assessment of explicit attitudes toward performance enhancing substances, for example Neuroenhancement or doping in sports, can be affected by social desirability biases and cheating attempts. According to Dual Process Theories of cognition, indirect measures like the Implicit Association Test (IAT) measure automatic associations toward a topic (as opposed to explicit attitudes measured by self-report measures). Such automatic associations are thought to occur rapidly and to evade voluntary control. However, whether or not such indirect tests actually reflect automatic associations is difficult to validate. Electroencephalography (EEG) has a superior time resolution which can differentiate between highly automatic and more elaborate processing stages. We therefore used EEG to examine at which processing stages cortical differences between negative and positive attitudes to doping occur, and whether or not these differences can be related to BIAT scores. We tested 42 university students (31 females, 24.43 ± 3.17 years old), who were requested to complete a brief doping IAT (BIAT) on attitudes toward doping. Cerebral activity during doping BIAT completion was assessed using high-density EEG. Behaviorally, participants' D-scores exhibited negative attitudes toward doping, represented by faster reaction times in the doping + dislike pairing task. Event-related potentials (ERPs) revealed the earliest effects between 200 and 300 ms. Here, a relatively larger occipital positivity was found for the doping + dislike pairing task. Further, in the LPP time range between 400 and 600 ms a larger late positive potential was found for the doping + dislike pairing task over central regions. These LPP amplitude differences successfully predicted participants' BIAT D-scores. Results indicate that event-related potentials differentiate between positive and negative doping attitudes at stages of mid-latency. However, it seems that IAT scores can be predicted only by the later occurring LPP. Our study is the first to investigate the cerebral correlates that contribute to test scores obtained in the indirect testing of automatic associations toward doping. The implications of our results for the broader NE concept are discussed in light of the conceptual similarity of doping and NE. PMID:26733914

  7. Automatic Earthquake Detection and Location by Waveform coherency in Alentejo (South Portugal) Using CatchPy

    NASA Astrophysics Data System (ADS)

    Custodio, S.; Matos, C.; Grigoli, F.; Cesca, S.; Heimann, S.; Rio, I.

    2015-12-01

    Seismic data processing is currently undergoing a step change, benefitting from high-volume datasets and advanced computing power. In the last decade, a permanent seismic network of 30 broadband stations, complemented by dense temporary deployments, covered mainland Portugal. This outstanding regional coverage currently enables the computation of a high-resolution image of the seismicity of Portugal, which contributes to fitting together the pieces of the regional seismo-tectonic puzzle. Although traditional manual inspections are valuable to refine automatic results, they are impracticable with the big data volumes now available. When conducted alone they are also less objective, since the criteria are defined by the analyst. In this work we present CatchPy, a scanning algorithm to detect earthquakes in continuous datasets. Our main goal is to implement an automatic earthquake detection and location routine in order to have a tool to quickly process large data sets, while at the same time detecting low magnitude earthquakes (i.e. lowering the detection threshold). CatchPy is designed to produce an event database that can easily be located using existing location codes (e.g. Grigoli et al. 2013, 2014). We use CatchPy to perform automatic detection and location of earthquakes that occurred in the Alentejo region (South Portugal), taking advantage of a dense seismic network deployed in the region for two years during the DOCTAR experiment. Results show that our automatic procedure is particularly suitable for small aperture networks. The event detection is performed by continuously computing the short-term-average/long-term-average of two different characteristic functions (CFs). For the P phases we used a CF based on the vertical energy trace, while for S phases we used a CF based on the maximum eigenvalue of the instantaneous covariance matrix (Vidale 1991). Seismic event location is performed by waveform coherence analysis, scanning different hypocentral coordinates (Grigoli et al. 2013, 2014). The reliability of automatic detections, phase pickings and locations is tested through quantitative comparison with manual results. This work is supported by project QuakeLoc, reference: PTDC/GEO-FIQ/3522/2012
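
    A minimal sketch of the STA/LTA trigger applied to a characteristic function such as the vertical energy trace; window lengths and trigger level are placeholder values, and CatchPy's actual detector and coherence-based locator are more elaborate.

    import numpy as np

    def sta_lta(cf, n_sta, n_lta):
        """Short-term / long-term average ratio of a characteristic function."""
        cf = np.asarray(cf, dtype=float)
        sta = np.convolve(cf, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(cf, np.ones(n_lta) / n_lta, mode="same")
        return sta / np.maximum(lta, 1e-12)

    def detect(cf, n_sta=50, n_lta=1000, on=3.0):
        """Return sample indices where the STA/LTA ratio rises above the trigger level."""
        ratio = sta_lta(cf, n_sta, n_lta)
        above = ratio > on
        onsets = np.where(above[1:] & ~above[:-1])[0] + 1
        return onsets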

  8. Detecting personnel around UGVs using stereo vision

    NASA Astrophysics Data System (ADS)

    Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.

    2008-04-01

    Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.

  9. Validating Intravascular Imaging with Serial Optical Coherence Tomography and Confocal Fluorescence Microscopy.

    PubMed

    Tardif, Pier-Luc; Bertrand, Marie-Jeanne; Abran, Maxime; Castonguay, Alexandre; Lefebvre, Joël; Stähli, Barbara E; Merlet, Nolwenn; Mihalache-Avram, Teodora; Geoffroy, Pascale; Mecteau, Mélanie; Busseuil, David; Ni, Feng; Abulrob, Abedelnasser; Rhéaume, Éric; L'Allier, Philippe; Tardif, Jean-Claude; Lesage, Frédéric

    2016-12-15

    Atherosclerotic cardiovascular diseases are characterized by the formation of a plaque in the arterial wall. Intravascular ultrasound (IVUS) provides high-resolution images allowing delineation of atherosclerotic plaques. When combined with near infrared fluorescence (NIRF), the plaque can also be studied at a molecular level with a large variety of biomarkers. In this work, we present a system enabling automated volumetric histology imaging of excised aortas that can spatially correlate results with combined IVUS/NIRF imaging of lipid-rich atheroma in cholesterol-fed rabbits. Pullbacks in the rabbit aortas were performed with a dual modality IVUS/NIRF catheter developed by our group. Ex vivo three-dimensional (3D) histology was performed combining optical coherence tomography (OCT) and confocal fluorescence microscopy, providing high-resolution anatomical and molecular information, respectively, to validate in vivo findings. The microscope was combined with a serial slicer allowing for the imaging of the whole vessel automatically. Colocalization of in vivo and ex vivo results is demonstrated. Slices can then be recovered to be tested in conventional histology.

  10. Towards real-time metabolic profiling of a biopsy specimen during a surgical operation by 1H high resolution magic angle spinning nuclear magnetic resonance: a case report

    PubMed Central

    2012-01-01

    Introduction Providing information on cancerous tissue samples during a surgical operation can help surgeons delineate the limits of a tumoral invasion more reliably. Here, we describe the use of metabolic profiling of a colon biopsy specimen by high resolution magic angle spinning nuclear magnetic resonance spectroscopy to evaluate tumoral invasion during a simulated surgical operation. Case presentation Biopsy specimens (n = 9) originating from the excised right colon of a 66-year-old Caucasian woman with an adenocarcinoma were automatically analyzed using a previously built statistical model. Conclusions Metabolic profiling results were in full agreement with those of a histopathological analysis. The time-response of the technique is sufficiently fast for it to be used effectively during a real operation (17 min/sample). Metabolic profiling has the potential to become a method to rapidly characterize cancerous biopsies in the operation theater. PMID:22257563

  11. The Aeroflex: A Bicycle for Mobile Air Quality Measurements

    PubMed Central

    Elen, Bart; Peters, Jan; Van Poppel, Martine; Bleux, Nico; Theunis, Jan; Reggente, Matteo; Standaert, Arnout

    2013-01-01

    Fixed air quality stations have limitations when used to assess people's real life exposure to air pollutants. Their spatial coverage is too limited to capture the spatial variability in, e.g., an urban or industrial environment. Complementary mobile air quality measurements can be used as an additional tool to fill this void. In this publication we present the Aeroflex, a bicycle for mobile air quality monitoring. The Aeroflex is equipped with compact air quality measurement devices to monitor ultrafine particle number counts, particulate mass and black carbon concentrations at a high resolution (up to 1 second). Each measurement is automatically linked to its geographical location and time of acquisition using GPS and Internet time. Furthermore, the Aeroflex is equipped with automated data transmission, data pre-processing and data visualization. The Aeroflex is designed with adaptability, reliability and user friendliness in mind. Over the past years, the Aeroflex has been successfully used for high resolution air quality mapping, exposure assessment and hot spot identification. PMID:23262484

  12. Urban Area Detection in Very High Resolution Remote Sensing Images Using Deep Convolutional Neural Networks.

    PubMed

    Tian, Tian; Li, Chang; Xu, Jinkang; Ma, Jiayi

    2018-03-18

    Detecting urban areas from very high resolution (VHR) remote sensing images plays an important role in the field of Earth observation. The recently-developed deep convolutional neural networks (DCNNs), which can extract rich features from training data automatically, have achieved outstanding performance on many image classification databases. Motivated by this fact, we propose a new urban area detection method based on DCNNs in this paper. The proposed method mainly includes three steps: (i) a visual dictionary is obtained based on the deep features extracted by pre-trained DCNNs; (ii) urban words are learned from labeled images; (iii) the urban regions are detected in a new image based on the nearest dictionary word criterion. The qualitative and quantitative experiments on different datasets demonstrate that the proposed method can obtain a remarkable overall accuracy (OA) and kappa coefficient. Moreover, it can also strike a good balance between the true positive rate (TPR) and false positive rate (FPR).
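
    Steps (i)-(iii) can be illustrated with a simple nearest-dictionary-word classifier over pre-extracted deep features; the use of k-means for the dictionary, the majority-vote labeling of words and all parameter values are assumptions for illustration, not the authors' exact method.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_dictionary(features, labels, n_words=64):
        """Cluster deep features into visual words and tag each word as urban or not.

        `features` is (n_patches, dim); `labels` is 1 for urban training patches
        and 0 otherwise. A word is tagged urban if the majority of the patches
        assigned to it are urban (a simplified reading of steps (i)-(ii)).
        """
        labels = np.asarray(labels)
        km = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(features)
        word_labels = np.array([
            labels[km.labels_ == w].mean() > 0.5 for w in range(n_words)
        ])
        return km, word_labels

    def classify_patches(km, word_labels, features):
        """Assign each new patch the label of its nearest dictionary word (step (iii))."""
        return word_labels[km.predict(features)]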

  13. The Aeroflex: a bicycle for mobile air quality measurements.

    PubMed

    Elen, Bart; Peters, Jan; Poppel, Martine Van; Bleux, Nico; Theunis, Jan; Reggente, Matteo; Standaert, Arnout

    2012-12-24

    Fixed air quality stations have limitations when used to assess people's real life exposure to air pollutants. Their spatial coverage is too limited to capture the spatial variability in, e.g., an urban or industrial environment. Complementary mobile air quality measurements can be used as an additional tool to fill this void. In this publication we present the Aeroflex, a bicycle for mobile air quality monitoring. The Aeroflex is equipped with compact air quality measurement devices to monitor ultrafine particle number counts, particulate mass and black carbon concentrations at a high resolution (up to 1 second). Each measurement is automatically linked to its geographical location and time of acquisition using GPS and Internet time. Furthermore, the Aeroflex is equipped with automated data transmission, data pre-processing and data visualization. The Aeroflex is designed with adaptability, reliability and user friendliness in mind. Over the past years, the Aeroflex has been successfully used for high resolution air quality mapping, exposure assessment and hot spot identification. 

  14. Classification Accuracy Increase Using Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc. and therefore may lead to wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Another improvement can be achieved by fusion approaches for multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method in comparison to other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport objects, forest, roads, rail roads, etc.

  15. UV-laser-based microscopic dissection of tree rings - a novel sampling tool for δ(13) C and δ(18) O studies.

    PubMed

    Schollaen, Karina; Heinrich, Ingo; Helle, Gerhard

    2014-02-01

    UV-laser-based microscopic systems were utilized to dissect and sample organic tissue for stable isotope measurements from thin wood cross-sections. We tested UV-laser-based microscopic tissue dissection in practice for high-resolution isotopic analyses (δ(13) C/δ(18) O) on thin cross-sections from different tree species. The method allows serial isolation of tissue of any shape and from millimetre down to micrometre scales. On-screen pre-defined areas of interest were automatically dissected and collected for mass spectrometric analysis. Three examples of high-resolution isotopic analyses revealed that: in comparison to δ(13) C of xylem cells, woody ray parenchyma of deciduous trees have the same year-to-year variability, but reveal offsets that are opposite in sign depending on whether wholewood or cellulose is considered; high-resolution tree-ring δ(18) O profiles of Indonesian teak reflect monsoonal rainfall patterns and are sensitive to rainfall extremes caused by ENSO; and seasonal moisture signals in intra-tree-ring δ(18) O of white pine are weighted by nonlinear intra-annual growth dynamics. The applications demonstrate that the use of UV-laser-based microscopic dissection allows for sampling plant tissue at ultrahigh resolution and unprecedented precision. This new technique facilitates sampling for stable isotope analysis of anatomical plant traits like combined tree eco-physiological, wood anatomical and dendroclimatological studies. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  16. Kite aerial photography for low-cost, ultra-high spatial resolution multi-spectral mapping of intertidal landscapes.

    PubMed

    Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J; Bongiorno, Daniel

    2013-01-01

    Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.

  17. Kite Aerial Photography for Low-Cost, Ultra-high Spatial Resolution Multi-Spectral Mapping of Intertidal Landscapes

    PubMed Central

    Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J.; Bongiorno, Daniel

    2013-01-01

    Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales. PMID:24069206

  18. High spatial resolution imaging for structural health monitoring based on virtual time reversal

    NASA Astrophysics Data System (ADS)

    Cai, Jian; Shi, Lihua; Yuan, Shenfang; Shao, Zhixue

    2011-05-01

    Lamb waves are widely used in structural health monitoring (SHM) of plate-like structures. Due to the dispersion effect, Lamb wavepackets will be elongated and the resolution for damage identification will be strongly affected. This effect can be automatically compensated by the time reversal process (TRP). However, the time information of the compensated waves is also removed at the same time. To improve the spatial resolution of Lamb wave detection, virtual time reversal (VTR) is presented in this paper. In VTR, a changing-element excitation and reception mechanism (CERM) rather than the traditional fixed excitation and reception mechanism (FERM) is adopted for time information conservation. Furthermore, the complicated TRP procedure is replaced by simple signal operations which can make savings in the hardware cost for recording and generating the time-reversed Lamb waves. After the effects of VTR for dispersive damage scattered signals are theoretically analyzed, the realization of VTR involving the acquisition of the transfer functions of damage detecting paths under step pulse excitation is discussed. Then, a VTR-based imaging method is developed to improve the spatial resolution of the delay-and-sum imaging with a sparse piezoelectric (PZT) wafer array. Experimental validation indicates that the damage scattered wavepackets of A0 mode in an aluminum plate are partly recompressed and focalized with their time information preserved by VTR. Both the single damage and the dual adjacent damages in the plate can be clearly displayed with high spatial resolution by the proposed VTR-based imaging method.
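
    The delay-and-sum imaging step mentioned above maps each transducer pair's scattered-signal envelope onto pixels according to the expected actuator-pixel-sensor time of flight. The sketch below illustrates that principle under assumed geometry and variable names (it is not the authors' VTR implementation and omits dispersion compensation).

    ```python
    # Minimal delay-and-sum imaging sketch for a sparse PZT array (illustrative).
    import numpy as np
    from scipy.signal import hilbert

    def delay_and_sum(signals, pzt_xy, grid_x, grid_y, fs, c_group):
        """signals[i][j] : residual (damage-scattered) signal for path i -> j
        pzt_xy           : (N, 2) transducer coordinates in metres
        fs               : sampling rate in Hz, c_group : group velocity in m/s."""
        n = len(pzt_xy)
        image = np.zeros((grid_y.size, grid_x.size))
        # Precompute signal envelopes once per actuator-sensor path.
        env = {(i, j): np.abs(hilbert(signals[i][j]))
               for i in range(n) for j in range(n) if i != j}
        for yi, y in enumerate(grid_y):
            for xi, x in enumerate(grid_x):
                d = np.hypot(pzt_xy[:, 0] - x, pzt_xy[:, 1] - y)  # pixel-to-PZT distances
                for i in range(n):
                    for j in range(n):
                        if i == j:
                            continue
                        tof = (d[i] + d[j]) / c_group              # actuator -> pixel -> sensor
                        k = int(round(tof * fs))
                        e = env[(i, j)]
                        if k < e.size:
                            image[yi, xi] += e[k]
        return image
    ```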

  19. Radar image processing of real aperture SLAR data for the detection and identification of iceberg and ship targets

    NASA Technical Reports Server (NTRS)

    Marthaler, J. G.; Heighway, J. E.

    1979-01-01

    An iceberg detection and identification system consisting of a moderate-resolution Side Looking Airborne Radar (SLAR) interfaced with a Radar Image Processor (RIP), based on a ROLM 1664 computer with a 32K core memory expandable to 64K, is described. The system can be operated in high- or low-resolution sampling modes. Specifically designed algorithms are applied to digitized signal returns to provide automatic target detection and location, geometrically correct video image display, and data recording. The real-aperture Motorola AN/APS-94D SLAR operates in the X-band and is tunable between 9.10 and 9.40 GHz; its output power is 45 kW peak with a pulse repetition rate of 750 pulses per second. Schematic diagrams of the system are provided, together with preliminary test data.

  20. Automated pinhole-aperture diagnostic for the current profiling of TWT electron beams

    NASA Astrophysics Data System (ADS)

    Wei, Yu-Xiang; Huang, Ming-Guang; Liu, Shu-Qing; Liu, Jin-Yue; Hao, Bao-Liang; Du, Chao-Hai; Liu, Pu-Kun

    2013-02-01

    The measurement system reported here is intended for use in determining the current density distribution of electron beams from Pierce guns for use in TWTs. The system was designed to automatically scan the cross section of the electron beam and collect high-resolution data with a Faraday cup probe mounted on a multistage manipulator under LabVIEW control. A 0.06 mm thick molybdenum plate with a pinhole and a Faraday cup, mounted together as a probe assembly, was employed to sample the electron beam current with 0.5 µm spatial resolution. Thermal analysis of the probe under pulsed beam heating is discussed. An electron gun with a perveance of 0.45 µP and an expected minimum beam radius of 0.42 mm was measured, and the three-dimensional current density distribution, beam envelope and phase space are presented.

  1. Multibeam monopulse radar for airborne sense and avoid system

    NASA Astrophysics Data System (ADS)

    Gorwara, Ashok; Molchanov, Pavlo

    2016-10-01

    The multibeam monopulse radar for an Airborne Based Sense and Avoid (ABSAA) system concept is the next step in the development of the passive monopulse direction finder proposed by Stephen E. Lipsky in the 1980s. In the proposed system, a multibeam monopulse radar with an array of directional antennas is positioned on a small aircraft or Unmanned Aircraft System (UAS). Radar signals are simultaneously transmitted and received by multiple angle-shifted directional antennas with overlapping antenna patterns, covering the entire sky with 360° of both horizontal and vertical coverage. Digitizing the amplitude and phase of the signals in the separate directional antennas relative to reference signals provides high-accuracy, high-resolution range and azimuth measurement and allows real-time recording of the amplitude and phase of signals reflected from non-cooperative aircraft. High-resolution range and azimuth measurement provides minimal tracking errors in both position and velocity of non-cooperative aircraft and is determined by the sampling frequency of the digitizer. High-speed sampling with a high-accuracy processor clock provides high-resolution phase/time-domain measurement even for directional antennas with a wide Field of View (FOV). Fourier transform (frequency-domain processing) of the received radar signals provides signatures and dramatically increases the probability of detection for non-cooperative aircraft. Steering the transmitting power and the integration/correlation period of the received reflected signals for separate antennas (directions) allows ground clutter to be dramatically decreased for low-altitude flights. An open architecture and modular construction allow the combination of the radar sensor with Automatic Dependent Surveillance - Broadcast (ADS-B), electro-optic and acoustic sensors.
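
    To make the amplitude-comparison idea behind overlapping directional antennas concrete, the toy model below estimates a target bearing from the amplitudes received in two squinted beams, assuming idealized Gaussian voltage patterns. This is an illustrative model only, not the processing of the proposed radar; all parameter names are ours.

    ```python
    # Toy amplitude-comparison direction finding with two overlapping Gaussian beams.
    import numpy as np

    def bearing_from_amplitudes(v1, v2, squint_deg, beamwidth_deg):
        """v1, v2        : received voltage amplitudes in the two antennas
        squint_deg     : half-angle between the two beam axes
        beamwidth_deg  : 1-sigma width of the assumed Gaussian voltage pattern.
        For patterns exp(-(theta -/+ s)^2 / (2 sigma^2)),
        ln(v1/v2) = 2 s theta / sigma^2, so theta = sigma^2 ln(v1/v2) / (2 s)."""
        sigma = beamwidth_deg
        return (sigma ** 2 / (2.0 * squint_deg)) * np.log(v1 / v2)

    # Quick self-check with a simulated target 3 degrees off boresight:
    theta_true, s, sig = 3.0, 5.0, 10.0
    v_1 = np.exp(-(theta_true - s) ** 2 / (2 * sig ** 2))
    v_2 = np.exp(-(theta_true + s) ** 2 / (2 * sig ** 2))
    print(bearing_from_amplitudes(v_1, v_2, s, sig))  # ~3.0 degrees
    ```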

  2. Endoluminal ultrasound applicator with an integrated RF coil for high-resolution magnetic resonance imaging-guided high-intensity contact ultrasound thermotherapy

    NASA Astrophysics Data System (ADS)

    Rata, Mihaela; Salomir, Rares; Umathum, Reiner; Jenne, Jürgen; Lafon, Cyril; Cotton, François; Bock, Michael

    2008-11-01

    High-intensity contact ultrasound (HICU) under MRI guidance may provide minimally invasive treatment of endocavitary digestive tumors in the esophagus, colon or rectum. In this study, a miniature receive-only coil was integrated into an endoscopic ultrasound applicator to offer high-resolution MRI guidance of thermotherapy. A cylindrical plastic support with an incorporated single element flat transducer (9.45 MHz, water cooling tip) was made and equipped with a rectangular RF loop coil surrounding the active element. The integrated coil provided significantly higher sensitivity than a four-element extracorporeal phased array coil, and the standard deviation of the MR thermometry (SDT) improved up to a factor of 7 at 10 mm depth in tissue. High-resolution morphological images (T1w-TFE and IR-T1w-TSE with a voxel size of 0.25 × 0.25 × 3 mm3) and accurate thermometry data (the PRFS method with a voxel size of 0.5 × 0.5 × 5 mm3, 2.2 s/image, 0.3 °C voxel-wise SDT) were acquired in an ex vivo esophagus sample, on a clinical 1.5T scanner. The endoscopic device was actively operated under automatic temperature control, demonstrating a high level of accuracy (1.7% standard deviation, 1.1% error of mean value), which indicates that this technology may be suitable for HICU therapy of endoluminal cancer.
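
    The PRFS thermometry referenced above converts the phase difference between a dynamic image and a baseline into a temperature change. The sketch below shows the commonly used relation under standard assumptions (a PRF coefficient of about -0.01 ppm/°C and a sign convention that depends on the scanner); it is a simplification, not the study's reconstruction code.

    ```python
    # Minimal PRFS MR thermometry sketch (assumed constants and sign convention).
    import numpy as np

    GAMMA = 2 * np.pi * 42.576e6   # proton gyromagnetic ratio, rad s^-1 T^-1
    ALPHA = -0.01e-6               # PRF thermal coefficient, -0.01 ppm/degC as 1/degC

    def prfs_delta_t(img, baseline, b0=1.5, te=10e-3):
        """img, baseline : complex images; b0 in tesla; te (echo time) in seconds."""
        dphi = np.angle(img * np.conj(baseline))   # wrapped phase difference, radians
        return dphi / (GAMMA * ALPHA * b0 * te)    # temperature change in degC
    ```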

  3. Processes in the Resolution of Ambiguous Words: Towards a Model of Selective Inhibition. Cognitive Science Program, Technical Report No. 86-6.

    ERIC Educational Resources Information Center

    Yee, Penny L.

    This study investigates the role of specific inhibitory processes in lexical ambiguity resolution. An attentional view of inhibition and a view based on specific automatic inhibition between nodes predict different results when a neutral item is processed between an ambiguous word and a related target. Subjects were 32 English speakers with normal…

  4. Carotid stenosis assessment with multi-detector CT angiography: comparison between manual and automatic segmentation methods.

    PubMed

    Zhu, Chengcheng; Patterson, Andrew J; Thomas, Owen M; Sadat, Umar; Graves, Martin J; Gillard, Jonathan H

    2013-04-01

    Luminal stenosis is used for selecting the optimal management strategy for patients with carotid artery disease. The aim of this study is to evaluate the reproducibility of carotid stenosis quantification using manual and automated segmentation methods on submillimeter through-plane resolution Multi-Detector CT angiography (MDCTA). Thirty-five patients with carotid artery disease and >30% luminal stenosis, as identified by carotid duplex imaging, underwent contrast-enhanced MDCTA. Two experienced CT readers quantified carotid stenosis from axial source images, reconstructed maximum intensity projections (MIP) and a 3D carotid geometry automatically segmented by an open-source toolkit (Vascular Modelling Toolkit, VMTK), using NASCET criteria. Good agreement was observed among the measurements from axial images, MIP and automatic segmentation. The automatic segmentation method showed better inter-observer agreement between the readers (intra-class correlation coefficient (ICC): 0.99 for diameter stenosis measurement) than manual measurement of axial (ICC = 0.82) and MIP (ICC = 0.86) images. Carotid stenosis quantification using an automatic segmentation method has higher reproducibility compared with manual methods.
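
    For reference, the NASCET degree of stenosis used in the study compares the minimal residual lumen with the normal distal internal carotid diameter. The one-line illustration below uses our own variable names.

    ```python
    # NASCET percent stenosis = (1 - minimal lumen diameter / distal ICA diameter) * 100
    def nascet_percent_stenosis(min_lumen_diameter_mm, distal_ica_diameter_mm):
        return 100.0 * (1.0 - min_lumen_diameter_mm / distal_ica_diameter_mm)

    print(nascet_percent_stenosis(2.1, 6.0))  # -> 65.0 (percent stenosis)
    ```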

  5. How Attention Affects Spatial Resolution

    PubMed Central

    Carrasco, Marisa; Barbot, Antoine

    2015-01-01

    We summarize and discuss a series of psychophysical studies on the effects of spatial covert attention on spatial resolution, our ability to discriminate fine patterns. Heightened resolution is beneficial in most, but not all, visual tasks. We show how endogenous attention (voluntary, goal driven) and exogenous attention (involuntary, stimulus driven) affect performance on a variety of tasks mediated by spatial resolution, such as visual search, crowding, acuity, and texture segmentation. Exogenous attention is an automatic mechanism that increases resolution regardless of whether it helps or hinders performance. In contrast, endogenous attention flexibly adjusts resolution to optimize performance according to task demands. We illustrate how psychophysical studies can reveal the underlying mechanisms of these effects and allow us to draw linking hypotheses with known neurophysiological effects of attention. PMID:25948640

  6. Urban Boundary Extraction and Urban Sprawl Measurement Using High-Resolution Remote Sensing Images: a Case Study of China's Provincial

    NASA Astrophysics Data System (ADS)

    Wang, H.; Ning, X.; Zhang, H.; Liu, Y.; Yu, F.

    2018-04-01

    Urban boundary is an important indicator for urban sprawl analysis. However, methods of urban boundary extraction have been inconsistent, and construction land or urban impervious surfaces were usually used to represent urban areas in coarse-resolution images, resulting in lower precision and incomparable urban boundary products. To solve the above problems, a semi-automatic method of urban boundary extraction is proposed that uses high-resolution images and geographic information data. Urban landscape and form characteristics and geographical knowledge were combined to generate a series of standardized rules for urban boundary extraction. Urban boundaries of China's 31 provincial capitals in the years 2000, 2005, 2010 and 2015 were extracted with the above-mentioned method. Compared with two other open urban boundary products, the accuracy of the urban boundaries in this study was the highest. The urban boundaries, together with other thematic data, were integrated to measure and analyse urban sprawl. Results showed that China's provincial capitals underwent rapid urbanization from 2000 to 2015, with the total area growing from 6520 square kilometres to 12398 square kilometres. The urban area of provincial capitals showed remarkable regional differences and a high degree of concentration. Urban land became more intensive in general. The urban sprawl rate was not in harmony with the population growth rate. About sixty percent of the new urban areas came from cultivated land. The paper provides a consistent method of urban boundary extraction and urban sprawl measurement using high-resolution remote sensing images. The results on the urban sprawl of China's provincial capitals provide valuable urbanization information for government and the public.

  7. Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback

    NASA Astrophysics Data System (ADS)

    Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai

    2012-01-01

    With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large quantity of these scenes are never retrieved and used. Thus automatic retrieval of desired image data using query by image content, to fully utilize the huge repository volume, is becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze all the image information mining (IIM) methods to make it easier for users to select a method depending upon their requirements. We concentrate our study only on high-resolution SAR images, and we propose to use InSAR observations instead of single look complex (SLC) images alone for mining scenes containing coherent objects such as high-rise buildings. However, in the case of objects with less coherence, such as areas with vegetation cover, SLC images exhibit better performance. We demonstrate an IIM performance comparison using complex Gauss-Markov Random Fields as the texture descriptor for image patches and SVM relevance feedback.
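
    The SVM relevance-feedback loop mentioned above can be sketched as follows, with generic patch descriptors standing in for the complex-GMRF texture features and scikit-learn assumed; this is an illustration of the interaction pattern, not the authors' system.

    ```python
    # Sketch of one round of SVM relevance feedback over image-patch descriptors.
    import numpy as np
    from sklearn.svm import SVC

    def relevance_feedback(features, labeled_idx, labels, n_return=20):
        """features            : (n_patches, n_dims) descriptor matrix
        labeled_idx, labels : indices and 0/1 relevance labels given by the user."""
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(features[labeled_idx], labels)
        scores = clf.decision_function(features)        # higher = more relevant
        ranking = np.argsort(-scores)
        # Return the top-ranked, still-unlabeled patches for the next feedback round.
        already = set(labeled_idx)
        return [int(i) for i in ranking if i not in already][:n_return]
    ```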

  8. Automated detection of ice cliffs within supraglacial debris cover

    NASA Astrophysics Data System (ADS)

    Herreid, Sam; Pellicciotti, Francesca

    2018-05-01

    Ice cliffs within a supraglacial debris cover have been identified as a source for high ablation relative to the surrounding debris-covered area. Due to their small relative size and steep orientation, ice cliffs are difficult to detect using nadir-looking space borne sensors. The method presented here uses surface slopes calculated from digital elevation model (DEM) data to map ice cliff geometry and produce an ice cliff probability map. Surface slope thresholds, which can be sensitive to geographic location and/or data quality, are selected automatically. The method also attempts to include area at the (often narrowing) ends of ice cliffs which could otherwise be neglected due to signal saturation in surface slope data. The method was calibrated in the eastern Alaska Range, Alaska, USA, against a control ice cliff dataset derived from high-resolution visible and thermal data. Using the same input parameter set that performed best in Alaska, the method was tested against ice cliffs manually mapped in the Khumbu Himal, Nepal. Our results suggest the method can accommodate different glaciological settings and different DEM data sources without a data intensive (high-resolution, multi-data source) recalibration.
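
    The core step described above, computing surface slope from a DEM and thresholding it into an initial ice-cliff candidate mask, can be sketched as follows (our variable names; the published automatic threshold selection and end-extension steps are not reproduced).

    ```python
    # Slope-threshold candidate mask from a DEM (illustrative sketch).
    import numpy as np

    def ice_cliff_candidates(dem, cell_size_m, slope_threshold_deg):
        dz_dy, dz_dx = np.gradient(dem, cell_size_m)           # elevation gradients
        slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
        return slope_deg >= slope_threshold_deg                # boolean candidate mask
    ```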

  9. Automatic Building Detection based on Supervised Classification using High Resolution Google Earth Images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, S.; Ghaffarian, S.

    2014-08-01

    This paper presents a novel approach to detecting buildings by automating the training-area collection stage of supervised classification. The method is based on the fact that a 3D building structure should cast a shadow under suitable imaging conditions. Therefore, the methodology begins with detecting and masking out the shadow areas using the luminance component of the LAB color space, which indicates the lightness of the image, and a novel double-thresholding technique. Further, the training areas for supervised classification are selected by automatically determining a buffer zone on each building whose shadow is detected, using the shadow shape and the sun illumination direction. Thereafter, by calculating the statistics of each buffer zone collected from the building areas, the Improved Parallelepiped Supervised Classification is executed to detect the buildings. Standard deviation thresholding is applied to the Parallelepiped classification method to improve its accuracy. Finally, simple morphological operations are conducted to remove noise and increase the accuracy of the results. The experiments were performed on a set of high-resolution Google Earth images. The performance of the proposed approach was assessed by comparing its results with reference data using well-known quality measurements (Precision, Recall and F1-score) to evaluate the pixel-based and object-based performance. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.4 % and 85.3 % overall pixel-based and object-based precision, respectively.
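
    The shadow-masking step can be illustrated as below: shadows appear dark in the L (lightness) channel of the LAB colour space, and a double threshold keeps strong shadow pixels plus weaker shadow pixels connected to them. The thresholds and connectivity rule here are illustrative assumptions, not the paper's values.

    ```python
    # Hedged sketch of LAB-lightness shadow masking with a double threshold.
    import cv2
    import numpy as np

    def shadow_mask(bgr_image, low=60, high=90):
        lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
        lightness = lab[:, :, 0]                       # 8-bit L channel (0-255)
        strong = (lightness < low).astype(np.uint8)
        weak = (lightness < high).astype(np.uint8)
        # Keep weak-shadow components that touch at least one strong-shadow pixel.
        n_labels, labels = cv2.connectedComponents(weak)
        keep = np.zeros_like(weak)
        for lbl in range(1, n_labels):
            component = labels == lbl
            if strong[component].any():
                keep[component] = 1
        return keep.astype(bool)
    ```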

  10. a Novel Image Acquisition and Processing Procedure for Fast Tunnel Dsm Production

    NASA Astrophysics Data System (ADS)

    Roncella, R.; Umili, G.; Forlani, G.

    2012-07-01

    In mining operations, evaluating the stability condition of the excavated front is critical to ensure safe and correct planning of the subsequent activities. The procedure currently used for this purpose has some shortcomings: safety of the geologist, completeness of data collection and objective documentation of the results. In the last decade it has been shown that the geostructural parameters necessary for stability analysis can be derived from high-resolution digital surface models (DSM) of rock faces. With the objective of overcoming the limitations of the traditional survey and minimizing data capture times, thereby reducing delays in mining site operations, a photogrammetric system to generate high-resolution DSMs of tunnels has been realized. A fast, effective and complete data capture method has been developed, and the orientation and restitution phases have been largely automated. The survey operations take no more time than the traditional ones; no topographic measurements other than those already available are required. To make the data processing fast and economic, our Structure from Motion procedure has been slightly modified to adapt to the peculiar block geometry, while the DSM of the tunnel is created using automatic image correlation techniques. The geomechanical data are sampled on the DSM by using the acquired images in a GUI and a segmentation procedure to select discontinuity planes. To allow easier and faster identification of relevant features of the tunnel surface, an orthophoto of the tunnel is produced, again using an automatic procedure. A case study in which a tunnel section of ca. 130 m was surveyed is presented.

  11. A scale self-adapting segmentation approach and knowledge transfer for automatically updating land use/cover change databases using high spatial resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Zhihua; Yang, Xiaomei; Lu, Chen; Yang, Fengshuo

    2018-07-01

    Automatic updating of land use/cover change (LUCC) databases using high spatial resolution images (HSRI) is important for environmental monitoring and policy making, especially for coastal areas that connect the land and coast and that tend to change frequently. Many object-based change detection methods have been proposed, especially those combining historical LUCC with HSRI. However, the scale parameter(s) used to segment the serial temporal images, which directly determines the average object size, is hard to choose without expert intervention. The samples transferred from historical LUCC also need expert intervention to avoid insufficient or wrong samples. With respect to choosing the scale parameter(s), a Scale Self-Adapting Segmentation (SSAS) approach, based on exponential sampling of the scale parameter and the location of the local maximum of a weighted local variance, is proposed to address the scale selection problem when segmenting images constrained by LUCC for detecting changes. With respect to transferring samples, Knowledge Transfer (KT), in which a classifier trained on historical images with LUCC is applied to the classification of updated images, is also proposed. Comparison experiments were conducted in a coastal area of Zhujiang, China, using SPOT 5 images acquired in 2005 and 2010. The results reveal that (1) SSAS can segment images more effectively without expert intervention, and (2) KT can also reach the maximum sample-transfer accuracy without expert intervention. The strategy SSAS + KT is a good choice when the temporal historical image and LUCC match and the historical image and updated image are obtained from the same source.
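
    A rough sketch of the scale-selection idea follows, using scikit-image's Felzenszwalb segmentation as a stand-in segmenter: sample the scale parameter exponentially, compute an object-size-weighted local variance at each scale, and pick a local maximum. The segmenter, weighting and maximum rule here are assumptions for illustration; the SSAS algorithm itself is not reproduced.

    ```python
    # Hedged sketch of scale selection via weighted local variance over
    # exponentially sampled scale parameters.
    import numpy as np
    from skimage.segmentation import felzenszwalb

    def weighted_local_variance(gray, segments):
        total = 0.0
        for lbl in np.unique(segments):
            pixels = gray[segments == lbl]
            total += pixels.size * pixels.var()        # weight variance by object size
        return total / gray.size

    def select_scale(gray, base=2.0, n_scales=10):
        scales = [base ** k for k in range(n_scales)]  # exponential sampling
        wlv = [weighted_local_variance(gray, felzenszwalb(gray, scale=s)) for s in scales]
        # Take the first interior local maximum; fall back to the global maximum.
        for k in range(1, len(wlv) - 1):
            if wlv[k] > wlv[k - 1] and wlv[k] > wlv[k + 1]:
                return scales[k]
        return scales[int(np.argmax(wlv))]
    ```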

  12. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification without inputting any parameter values has long been an unattainable goal for remote sensing experts, who usually spend hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible, shareable and interoperable online. Based on these recent improvements, this paper presents an idea of parameterless automatic classification which only requires an image and automatically outputs a labeled vector; no parameters or operations are needed from end consumers. An approach is proposed to realize the idea. It adopts an ontology database to store the experience of tuning values for classifiers. A sample database is used to record training samples of image segments. Geoprocessing Web services are used as functional blocks to carry out the basic classification steps. Workflow technology is involved to turn the overall image classification into a fully automatic process. A Web-based prototype system named PACS (Parameterless Automatic Classification System) is implemented. A number of images were fed into the system for evaluation purposes. The results show that the approach can automatically classify remote sensing images with fairly good average accuracy, and they indicate that the classified results will be more accurate if the two databases have higher quality. Once the databases accumulate as much experience and as many samples as an expert has, the approach should be able to produce results of similar quality to those a human expert can obtain. Since the approach is fully automatic and parameterless, it can not only relieve remote sensing workers from heavy and time-consuming parameter tuning work, but also significantly shorten the waiting time for consumers and facilitate their engagement in image classification activities. Currently, the approach is used only on high-resolution optical three-band remote sensing imagery. The feasibility of using the approach on other kinds of remote sensing images, or of involving additional bands in the classification, will be studied in future work.

  13. Digital focusing of OCT images based on scalar diffraction theory and information entropy.

    PubMed

    Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K

    2012-11-01

    This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method.
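
    The two ingredients described above, scalar-diffraction propagation of a complex en face field and an entropy criterion evaluated over candidate distances, can be sketched as below. This is our simplification (angular-spectrum propagation and a minimum-entropy rule), not the paper's exact model or criterion.

    ```python
    # Illustrative angular-spectrum refocusing with an entropy-based focus search.
    import numpy as np

    def angular_spectrum_propagate(field, dz, wavelength, dx):
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, dx)
        fy = np.fft.fftfreq(ny, dx)
        fxx, fyy = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

    def image_entropy(intensity):
        p = intensity / intensity.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    def autofocus(field, wavelength, dx, dz_candidates):
        # The sharpest refocused plane is taken here as the one minimising entropy.
        ent = [image_entropy(np.abs(angular_spectrum_propagate(field, dz, wavelength, dx)) ** 2)
               for dz in dz_candidates]
        return dz_candidates[int(np.argmin(ent))]
    ```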

  14. Automated crystallographic ligand building using the medial axis transform of an electron-density isosurface.

    PubMed

    Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T

    2005-10-01

    Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed using an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.

  15. A dense camera network for cropland (CropInsight) - developing high spatiotemporal resolution crop Leaf Area Index (LAI) maps through network images and novel satellite data

    NASA Astrophysics Data System (ADS)

    Kimm, H.; Guan, K.; Luo, Y.; Peng, J.; Mascaro, J.; Peng, B.

    2017-12-01

    Monitoring crop growth conditions is of primary interest for crop yield forecasting, food production assessment, and risk management by individual farmers and agribusiness. Despite its importance, there is limited access to field-level crop growth and condition information in the public domain. This scarcity of ground truth data also hampers the use of satellite remote sensing for crop monitoring due to the lack of validation. Here, we introduce a new camera network (CropInsight) to monitor crop phenology, growth, and conditions, designed for the US Corn Belt landscape. Specifically, this network currently includes 40 sites (20 corn and 20 soybean fields) across the southern half of Champaign County, IL (approximately 800 km2). Its wide distribution and automatic operation enable the network to capture spatiotemporal variations of crop growth conditions continuously at the regional scale. At each site, low-maintenance, high-resolution RGB digital cameras are set up with a downward view from a height of 4.5 m to take continuous images. In this study, we will use these images and novel satellite data to construct daily LAI maps of Champaign County at 30 m spatial resolution. First, we will estimate LAI from the camera images and evaluate it using LAI data collected with the LAI-2200 (LI-COR, Lincoln, NE). Second, we will develop relationships between the camera-based LAI estimates and vegetation indices derived from a newly developed MODIS-Landsat fusion product (daily, 30 m resolution, RGB + NIR + SWIR bands) and Planet Labs' high-resolution satellite data (daily, 5 meter, RGB). Finally, we will scale up these relationships to generate a high spatiotemporal resolution crop LAI map for the whole of Champaign County. The proposed work has the potential to be extended to other agro-ecosystems and to the broader US Corn Belt.
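
    A simplified version of the camera-to-LAI calibration step might look like the sketch below: derive a greenness index from each downward-looking RGB image and fit a linear model against the reference LAI-2200 measurements. The excess-green index and the linear model are assumptions for illustration, not the project's actual algorithm.

    ```python
    # Hedged sketch: camera greenness index regressed against reference LAI.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def mean_excess_green(rgb):
        """rgb : float array (H, W, 3) scaled to [0, 1]."""
        s = rgb.sum(axis=2) + 1e-9
        r, g, b = (rgb[:, :, i] / s for i in range(3))   # chromatic coordinates
        return float((2 * g - r - b).mean())             # mean excess-green index

    def calibrate_lai(camera_images, lai_reference):
        x = np.array([mean_excess_green(img) for img in camera_images]).reshape(-1, 1)
        model = LinearRegression().fit(x, np.asarray(lai_reference))
        return model  # model.predict() then maps new camera images to estimated LAI
    ```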

  16. Low-Cost Ultra-High Spatial and Temporal Resolution Mapping of Intertidal Rock Platforms

    NASA Astrophysics Data System (ADS)

    Bryson, M.; Johnson-Roberson, M.; Murphy, R.

    2012-07-01

    Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time which could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at relatively coarse, sub-meter resolutions or with limited temporal resolutions and relatively high costs for small-scale environmental science and ecology studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric pipeline that was developed for constructing high-resolution, 3D, photo-realistic terrain models of intertidal rocky shores. The processing pipeline uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points, and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine colour and topographic information at sub-centimeter resolutions over an area of approximately 100 m, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rock platform at Cape Banks, Sydney, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.

  17. SU-E-CAMPUS-I-04: Automatic Skin-Dose Mapping for An Angiographic System with a Region-Of-Interest, High-Resolution Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayan, S; Rana, V; Setlur Nagesh, S

    2014-06-15

    Purpose: Our real-time skin dose tracking system (DTS) has been upgraded to monitor dose for the micro-angiographic fluoroscope (MAF), a high-resolution, small field-of-view x-ray detector. Methods: The MAF has been mounted on a changer on a clinical C-Arm gantry so it can be used interchangeably with the standard flat-panel detector (FPD) during neuro-interventional procedures when high resolution is needed in a region-of-interest. To monitor patient skin dose when using the MAF, our DTS has been modified to automatically account for the change in scatter for the very small MAF FOV and to provide separated dose distributions for each detector. The DTS is able to provide a color-coded mapping of the cumulative skin dose on a 3D graphic model of the patient. To determine the correct entrance skin exposure to be applied by the DTS, a correction factor was determined by measuring the exposure at the entrance surface of a skull phantom with an ionization chamber as a function of entrance beam size for various beam filters and kVps. Entrance exposure measurements included primary radiation, patient backscatter and table forward scatter. To allow separation of the dose from each detector, a parameter log is kept that allows a replay of the procedure exposure events and recalculation of the dose components. The graphic display can then be constructed showing the dose distribution from the MAF and FPD separately or together. Results: The DTS is able to provide separate displays of dose for the MAF and FPD with field-size specific scatter corrections. These measured corrections change from about 49% down to 10% when changing from the FPD to the MAF. Conclusion: The upgraded DTS allows identification of the patient skin dose delivered when using each detector in order to achieve improved dose management as well as to facilitate peak skin-dose reduction through dose spreading. Research supported in part by Toshiba Medical Systems Corporation and NIH Grants R43FD0158401, R44FD0158402 and R01EB002873.

  18. Near Real Time Applications for Maritime Situational Awareness

    NASA Astrophysics Data System (ADS)

    Schwarz, E.; Krause, D.; Berg, M.; Daedelow, H.; Maass, H.

    2015-04-01

    Applications to derive maritime value-added products, such as oil spill and ship detection based on remote sensing SAR image data, are being developed and integrated at the Ground Station Neustrelitz, part of the German Remote Sensing Data Center. Products of meteo-marine parameters such as wind and waves will complement the product portfolio. Research and development aim at the implementation of highly automated services for operational use. SAR images are used because they can provide maritime products with high spatial resolution over wide swaths and under all weather conditions. In combination with other information, such as Automatic Identification System (AIS) data, fusion products are available to support Maritime Situational Awareness.

  19. Program Package for the Analysis of High Resolution High Signal-To-Noise Stellar Spectra

    NASA Astrophysics Data System (ADS)

    Piskunov, N.; Ryabchikova, T.; Pakhomov, Yu.; Sitnova, T.; Alekseeva, S.; Mashonkina, L.; Nordlander, T.

    2017-06-01

    The program package SME (Spectroscopy Made Easy), designed to perform analyses of stellar spectra using spectral fitting techniques, has been updated with new functions (isotopic and hyperfine splittings) in VALD and with grids of NLTE calculations for the energy levels of a few chemical elements. SME automatically derives stellar atmospheric parameters: effective temperature, surface gravity, chemical abundances, radial and rotational velocities, and turbulent velocities, taking into account all the effects defining spectral line formation. The SME package uses the best grids of stellar atmospheres, which allows us to perform spectral analysis with similar accuracy over a wide range of stellar parameters and metallicities - from dwarfs to giants of BAFGK spectral classes.

  20. The New Instrument Suite of the TSU/Fairborn 2m Automatic Spectroscopic Telescope

    NASA Astrophysics Data System (ADS)

    Muterspaugh, Matthew W.; Maxwell, T.; Williamson, M. W.; Fekel, F. C.; Ge, J.; Kelly, J.; Ghasempour, A.; Powell, S.; Zhao, B.; Varosi, F.; Schofield, S.; Liu, J.; Warner, C.; Jakeman, H.; Avner, L.; Swihart, S.; Harrison, C.; Fishler, D.

    2014-01-01

    Tied with the Liverpool Telescope as the world's largest fully robotic optical research telescope, Tennessee State University's (TSU) 2m Automatic Spectroscopic Telescope (AST) has recently been upgraded to improve performance and increase versatility by supporting multiple instruments. Its second-generation instrument head enables us to rapidly switch between any of up to twelve optical fibers, each of which can supply light to a different instrument. In 2013 construction was completed on a new temperature-controlled guest instrument building, and two new high-resolution spectrographs were commissioned. The current set of instrumentation includes (1) the telescope's original R=30,000 echelle spectrograph (0.38--0.83 microns simultaneous), (2) a single-order R=7,000 spectrograph centered on the Ca H&K features, (3) a single-mode-fiber-fed miniature echelle spectrograph (R=100,000; 0.48--0.62 microns simultaneous), (4) the University of Florida's EXPERT-3 spectrograph (R=100,000; 0.38--0.9 microns simultaneous; vacuum and temperature controlled) and (5) the University of Florida's FIRST spectrograph (R=70,000; 0.8--1.35 or 1.4--1.8 microns simultaneous; vacuum and temperature controlled). Future instruments include the Externally Dispersed Interferometry (EDI) Testbed, a combination of a low-resolution dispersed spectrograph and a Fourier Transform Spectrograph. We welcome inquiries from the community in regards to observing access and/or proposals for future guest instruments.

  1. Automatic Speech Recognition from Neural Signals: A Focused Review.

    PubMed

    Herff, Christian; Schultz, Tanja

    2016-01-01

    Speech interfaces have become widely accepted and are nowadays integrated in various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might be impossible due to loud environments, concerns about disturbing bystanders, or an inability to produce speech (i.e., patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but to simply envision oneself saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals due to low temporal resolution, but are very useful for the investigation of the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data, with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques used on neural signals, we discuss the Brain-to-text system.

  2. High Resolution BPM Upgrade for the ATF Damping Ring at KEK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eddy, N.; Briegel, C.; Fellenz, B.

    2011-08-17

    A beam position monitor (BPM) upgrade at the KEK Accelerator Test Facility (ATF) damping ring has been accomplished, carried out by a KEK/FNAL/SLAC collaboration under the umbrella of the global ILC R&D effort. The upgrade consists of a high-resolution, high-reproducibility read-out system, based on analog and digital down-conversion techniques and digital signal processing, and also implements a new automatic gain error correction scheme. The technical concept and realization as well as results of beam studies are presented. The next generation of linear colliders requires ultra-low vertical emittance of <2 pm-rad. The damping ring at the KEK Accelerator Test Facility (ATF) is designed to demonstrate this mission-critical goal. A high-resolution beam position monitor (BPM) system for the damping ring is one of the key tools for realizing this goal. The BPM system needs to provide two distinct measurements. First, a very high resolution (~100-200 nm) closed-orbit measurement which is averaged over many turns and realized with narrowband filter techniques - 'narrowband mode'. This is needed to monitor and steer the beam along an optimum orbit and to facilitate beam-based alignment to minimize non-linear field effects. Second is the ability to make turn-by-turn (TBT) measurements to support optics studies and corrections necessary to achieve the design performance. As the TBT measurement necessitates a wider bandwidth, it is often referred to as 'wideband mode'. The BPM upgrade was initiated as a KEK/SLAC/FNAL collaboration in the frame of the Global Design Initiative of the International Linear Collider. The project was realized and completed using Japan-US funds with Fermilab as the core partner.
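
    The narrowband read-out idea can be illustrated as below: digitally down-convert two opposing pickup signals to baseband, average their narrowband amplitudes, and form a difference-over-sum position estimate. The signal model, filter and scale factor are illustrative assumptions, not the ATF system's firmware.

    ```python
    # Hedged sketch of digital down-conversion and a difference-over-sum BPM estimate.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def ddc_amplitude(samples, fs, f_carrier, bw=1e5):
        t = np.arange(samples.size) / fs
        baseband = samples * np.exp(-2j * np.pi * f_carrier * t)   # mix to DC
        b, a = butter(4, bw / (fs / 2))                            # narrowband low-pass
        filt_r = filtfilt(b, a, baseband.real)
        filt_i = filtfilt(b, a, baseband.imag)
        return 2 * np.hypot(filt_r, filt_i).mean()                 # averaged amplitude

    def button_position(sig_plus, sig_minus, fs, f_carrier, k_mm=10.0):
        a_plus = ddc_amplitude(sig_plus, fs, f_carrier)
        a_minus = ddc_amplitude(sig_minus, fs, f_carrier)
        return k_mm * (a_plus - a_minus) / (a_plus + a_minus)      # beam offset in mm
    ```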

  3. High-resolution myocardial T1 mapping using single-shot inversion recovery fast low-angle shot MRI with radial undersampling and iterative reconstruction

    PubMed Central

    Joseph, Arun A; Kalentev, Oleksandr; Merboldt, Klaus-Dietmar; Voit, Dirk; Roeloffs, Volkert B; van Zalk, Maaike; Frahm, Jens

    2016-01-01

    Objective: To develop a novel method for rapid myocardial T1 mapping at high spatial resolution. Methods: The proposed strategy represents a single-shot inversion recovery experiment triggered to early diastole during a brief breath-hold. The measurement combines an adiabatic inversion pulse with a real-time readout by highly undersampled radial FLASH, iterative image reconstruction and T1 fitting with automatic deletion of systolic frames. The method was implemented on a 3-T MRI system using a graphics processing unit-equipped bypass computer for online application. Validations employed a T1 reference phantom including analyses at simulated heart rates from 40 to 100 beats per minute. In vivo applications involved myocardial T1 mapping in short-axis views of healthy young volunteers. Results: At 1-mm in-plane resolution and 6-mm section thickness, the inversion recovery measurement could be shortened to 3 s without compromising T1 quantitation. Phantom studies demonstrated T1 accuracy and high precision for values ranging from 300 to 1500 ms and up to a heart rate of 100 beats per minute. Similar results were obtained in vivo yielding septal T1 values of 1246 ± 24 ms (base), 1256 ± 33 ms (mid-ventricular) and 1288 ± 30 ms (apex), respectively (mean ± standard deviation, n = 6). Conclusion: Diastolic myocardial T1 mapping with use of single-shot inversion recovery FLASH offers high spatial resolution, T1 accuracy and precision, and practical robustness and speed. Advances in knowledge: The proposed method will be beneficial for clinical applications relying on native and post-contrast T1 quantitation. PMID:27759423
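
    The T1 fitting step described above is commonly done with a three-parameter inversion-recovery model and a Look-Locker-style correction for the continuous readout. The sketch below shows that generic fit (our code, not the publication's reconstruction pipeline).

    ```python
    # Three-parameter inversion-recovery T1 fit with apparent-T1 correction.
    import numpy as np
    from scipy.optimize import curve_fit

    def ir_model(ti, a, b, t1_star):
        return np.abs(a - b * np.exp(-ti / t1_star))

    def fit_t1(ti_ms, signal):
        p0 = (signal.max(), 2 * signal.max(), 1000.0)        # rough starting values
        (a, b, t1_star), _ = curve_fit(ir_model, ti_ms, signal, p0=p0, maxfev=5000)
        return t1_star * (b / a - 1.0)                       # corrected T1 in ms
    ```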

  4. An improved RST approach for timely alert and Near Real Time monitoring of oil spill disasters by using AVHRR data

    NASA Astrophysics Data System (ADS)

    Grimaldi, C. S. L.; Casciello, D.; Coviello, I.; Lacava, T.; Pergola, N.; Tramutoli, V.

    2011-05-01

    Information acquired and provided in Near Real Time is fundamental in contributing to reduce the impact of different sea pollution sources on the maritime environment. Optical data acquired by sensors aboard meteorological satellites, thanks to their high temporal resolution as well as to their delivery policy, can be profitably used for a Near Real Time sea monitoring, provided that accurate and reliable methodologies for analysis and investigation are designed, implemented and fully assessed. In this paper, the results achieved by the application of an improved version of RST (Robust Satellite Technique) to oil spill detection and monitoring will be shown. In particular, thermal infrared data acquired by the NOAA-AVHRR (National Oceanic and Atmospheric Administration-Advanced Very High Resolution Radiometer) have been analyzed and a new RST-based change detection index applied to the case of the oil spills that occurred off the Kuwait and Saudi Arabian coasts in January 1991 and during the Lebanon War in July 2006. The results obtained, even in comparison with those achieved by other AVHRR-based techniques, confirm the unique performance of the proposed approach in automatically detecting the presence of oil spill with a high level of reliability and sensitivity. Moreover, the potential of the extension of the proposed technique to sensors onboard geostationary satellites will be discussed within the context of oil spill monitoring systems, integrating products generated by high temporal (optical) and high spatial (radar) resolution satellite systems.
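
    Conceptually, RST-style indices compare each pixel's current observation with its own multi-year statistics for the same observational conditions. The sketch below is a simplified anomaly index of that kind (the operational RST change detection index includes further terms and quality filtering not shown here).

    ```python
    # Simplified RST-like local anomaly index over a multitemporal image stack.
    import numpy as np

    def rst_like_index(current, historical_stack):
        """current          : 2-D brightness-temperature image
        historical_stack : 3-D stack (n_years, rows, cols) of co-located scenes."""
        mu = historical_stack.mean(axis=0)
        sigma = historical_stack.std(axis=0)
        return (current - mu) / np.where(sigma > 0, sigma, np.nan)  # anomaly in sigma units
    ```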

  5. Automatic detection of pelvic lymph nodes using multiple MR sequences

    NASA Astrophysics Data System (ADS)

    Yan, Michelle; Lu, Yue; Lu, Renzhi; Requardt, Martin; Moeller, Thomas; Takahashi, Satoru; Barentsz, Jelle

    2007-03-01

    A system for automatic detection of pelvic lymph nodes is developed by incorporating complementary information extracted from multiple MR sequences. A single MR sequence lacks sufficient diagnostic information for lymph node localization and staging. Correct diagnosis often requires input from multiple complementary sequences, which makes manual detection of lymph nodes very labor intensive. Small lymph nodes are often missed even by highly trained radiologists. The proposed system is aimed at assisting radiologists in finding lymph nodes faster and more accurately. To the best of our knowledge, this is the first such system reported in the literature. A 3-dimensional (3D) MR angiography (MRA) image is employed for extracting blood vessels that serve as a guide in searching for pelvic lymph nodes. Segmentation, shape and location analysis of potential lymph nodes are then performed using a high-resolution 3D T1-weighted VIBE (T1-vibe) MR sequence acquired by a Siemens 3T scanner. An optional contrast-agent enhanced MR image, such as a post-ferumoxtran-10 T2*-weighted MEDIC sequence, can also be incorporated to further improve detection accuracy for malignant nodes. The system outputs a list of potential lymph node locations that are overlaid onto the corresponding MR sequences and presents them to users with associated confidence levels as well as their sizes and lengths in each axis. Preliminary studies demonstrate the feasibility of automatic lymph node detection and illustrate scenarios in which this system may be used to assist radiologists in diagnosis and reporting.

  6. Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.

    PubMed

    Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao

    2017-09-18

    High-resolution wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of the reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm2) and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate the method's promising potential applications in biological imaging.

  7. Detection of buried magnetic objects by a SQUID gradiometer system

    NASA Astrophysics Data System (ADS)

    Meyer, Hans-Georg; Hartung, Konrad; Linzen, Sven; Schneider, Michael; Stolz, Ronny; Fried, Wolfgang; Hauspurg, Sebastian

    2009-05-01

    We present a magnetic detection system based on superconducting gradiometric sensors (SQUID gradiometers). The system provides uniquely fast mapping of large areas with high resolution of both the magnetic field gradient and the local position. A main part of this work is the localization and classification of magnetic objects in the ground by automatic interpretation of the geomagnetic field gradients measured by the SQUID system. Based on specific features, the field is decomposed into segments, which allow inferences about possible objects in the ground. The global consideration of object-describing properties and their optimization using error minimization methods allows the reconstruction of superimposed features and the detection of buried objects. The analysis system for measured geomagnetic fields works fully automatically. Given a surface of area-measured gradients, the algorithm determines, within numerical limits, the absolute position of objects (including depth) with sub-pixel accuracy and allows arbitrary positions and attitudes of the sources. Several SQUID gradiometer data sets were used to show the applicability of the analysis algorithm.

  8. Research on Remote Sensing Geological Information Extraction Based on Object Oriented Classification

    NASA Astrophysics Data System (ADS)

    Gao, Hui

    2018-04-01

    Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited by people, and the geological working conditions are very poor; however, the stratum exposures are good and human interference is very small. Therefore, research on the automatic classification and extraction of remote sensing geological information has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of the various kinds of geological information are extracted. By setting thresholds based on a hierarchical classification, eight kinds of geological information were classified and extracted. Compared with existing geological maps, the accuracy analysis shows that the overall accuracy reached 87.8561 %, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  9. Computer-based route-definition system for peripheral bronchoscopy.

    PubMed

    Graham, Michael W; Gibbs, Jason D; Higgins, William E

    2012-04-01

    Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.

  10. Automatic Surveying For Hazard Prevention On Glacier De GiÉtro, Switzerland

    NASA Astrophysics Data System (ADS)

    Bauder, A.; Funk, M.; Bösch, H.

    The breaking off of large ice masses from the steep tongue of Glacier de Giétro may endanger a nearby reservoir: a falling ice mass could cause water to splash over the dam when the lake is nearly full. For this reason the glacier has been monitored intensively since the 1960s. An automatic theodolite was installed three years ago. It allows continuous displacement measurements of several targets on the glacier in order to detect short-term acceleration events. The installation includes telemetric data transmission, which provides for immediate recognition of hazardous situations and early alarming. The obtained data were analysed in terms of the precision and performance of the applied method, and a high temporal resolution was achieved. The comparison with traditional observations clearly shows the potential of modern instruments to improve monitoring schemes. We summarize the main results of this study and discuss the applicability of a modern motorized theodolite with target tracking and recognition ability for monitoring purposes.

  11. Completing fishing monitoring with spaceborne Vessel Detection System (VDS) and Automatic Identification System (AIS) to assess illegal fishing in Indonesia.

    PubMed

    Longépé, Nicolas; Hajduch, Guillaume; Ardianto, Romy; Joux, Romain de; Nhunfat, Béatrice; Marzuki, Marza I; Fablet, Ronan; Hermawan, Indra; Germain, Olivier; Subki, Berny A; Farhan, Riza; Muttaqin, Ahmad Deni; Gaspar, Philippe

    2017-10-26

    The Indonesian fisheries management system is now equipped with state-of-the-art technologies to deter and combat Illegal, Unreported and Unregulated (IUU) fishing. Since October 2014, non-cooperative fishing vessels can be detected by a spaceborne Vessel Detection System (VDS) based on high-resolution radar imagery, which directly benefits coordinated patrol vessels in an operational context. This study attempts to monitor the amount of illegal fishing in the Arafura Sea based on this new source of information. It is analyzed together with Vessel Monitoring System (VMS) and satellite-based Automatic Identification System (Sat-AIS) data, taking into account their own particularities. From October 2014 to March 2015, i.e. just after the establishment of a new moratorium by the Indonesian authorities, the estimated share of fishing vessels not carrying VMS, and thus operating illegally, ranges from 42 to 47%. One year later, in January 2016, this proportion had decreased, ranging from 32 to 42%. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Development of a high-resolution automatic digital (urine/electrolytes) flow volume and rate measurement system of miniature size

    NASA Technical Reports Server (NTRS)

    Liu, F. F.

    1975-01-01

    To aid in the quantitative analysis of man's physiological rhythms, a flowmeter to measure circadian patterns of electrolyte excretion during various environmental stresses was developed. One initial flowmeter was designed and fabricated, the sensor of which is the approximate size of a wristwatch. The detector section includes a special type of dielectric integrating type sensor which automatically controls, activates, and deactivates the flow sensor data output by determining the presence or absence of fluid flow in the system, including operation under zero-G conditions. The detector also provides qualitative data on the composition of the fluid. A compact electronic system was developed to indicate flow rate as well as total volume per release or the cumulative volume of several releases in digital/analog forms suitable for readout or telemetry. A suitable data readout instrument is also provided. Calibration and statistical analyses of the performance functions required of the flowmeter were also conducted.

  13. Electro-optical imaging systems integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, R.

    1987-01-01

    Since the advent of high resolution, high data rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct overflight and standoff reconnaissance systems and often has required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third generation (TLBR) was designed and two units delivered to rapidly produce high quality wet process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a "Scan FIX" capability which corrects for scanner fault errors and a "Scan LOC" system which provides for complete phase synchronism isolation between scanner and digital image data input via strobed, 2-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for Reconnaissance/Tactical applications.

  14. Extended axial imaging range, widefield swept source optical coherence tomography angiography.

    PubMed

    Liu, Gangjun; Yang, Jianlong; Wang, Jie; Li, Yan; Zang, Pengxiao; Jia, Yali; Huang, David

    2017-11-01

    We developed a high-speed, swept source OCT system for widefield OCT angiography (OCTA) imaging. The system has an extended axial imaging range of 6.6 mm. An electrical lens is used for fast, automatic focusing. The recently developed split-spectrum amplitude and phase-gradient angiography allow high-resolution OCTA imaging with only two B-scan repetitions. An improved post-processing algorithm effectively removed trigger jitter artifacts and reduced noise in the flow signal. We demonstrated high contrast 3 mm×3 mm OCTA image with 400×400 pixels acquired in 3 seconds and high-definition 8 mm×6 mm and 12 mm×6 mm OCTA images with 850×400 pixels obtained in 4 seconds. A widefield 8 mm×11 mm OCTA image is produced by montaging two 8 mm×6 mm scans. An ultra-widefield (with a maximum of 22 mm along both vertical and horizontal directions) capillary-resolution OCTA image is obtained by montaging six 12 mm×6 mm scans. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
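
    The flow contrast from two repeated B-scans is typically built from amplitude decorrelation. The sketch below shows a single-band simplification of that calculation (the split-spectrum band splitting, averaging and phase-gradient terms mentioned in the abstract are omitted).

    ```python
    # Single-band amplitude-decorrelation sketch between two repeated B-scans.
    import numpy as np

    def amplitude_decorrelation(bscan_1, bscan_2, eps=1e-9):
        """bscan_1, bscan_2 : amplitude (magnitude) B-scans of the same location."""
        a1, a2 = np.abs(bscan_1), np.abs(bscan_2)
        # ~0 for static tissue, approaching 1 where flow changes the speckle pattern.
        return 1.0 - (a1 * a2) / (0.5 * (a1 ** 2 + a2 ** 2) + eps)
    ```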

  15. The multiscale nature of magnetic pattern on the solar surface

    NASA Astrophysics Data System (ADS)

    Scardigli, S.; Del Moro, D.; Berrilli, F.

    Multiscale magnetic underdense regions (voids) appear in high resolution magnetograms of the quiet solar surface. These regions may be considered a signature of the underlying convective structure. The study of the associated pattern paves the way for the study of turbulent convective scales from granular to global. In order to address the question of the magnetic pattern driven by turbulent convection, we used a novel automatic void detection method to calculate void distributions. The absence of preferred scales of organization in the calculated distributions supports the multiscale nature of flows on the solar surface and the absence of preferred convective scales.

  16. Earthquake Damage Assessment Using Objective Image Segmentation: A Case Study of 2010 Haiti Earthquake

    NASA Technical Reports Server (NTRS)

    Oommen, Thomas; Rebbapragada, Umaa; Cerminaro, Daniel

    2012-01-01

    In this study, we perform a case study on imagery from the Haiti earthquake that evaluates a novel object-based approach for characterizing earthquake-induced surface effects of liquefaction against a traditional pixel-based change detection technique. Our technique, which combines object-oriented change detection with discriminant/categorical functions, shows the power of distinguishing earthquake-induced surface effects from changes in buildings using the object properties concavity, convexity, orthogonality and rectangularity. Our results suggest that object-based analysis holds promise for automatically extracting earthquake-induced damage from high-resolution aerial/satellite imagery.
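
    The object properties named above can be approximated with standard region descriptors. The following sketch uses scikit-image regionprops to compute solidity (a convexity/concavity proxy) and extent (a rectangularity proxy) on a toy binary mask; it illustrates the kind of per-object features that could feed a discriminant function, not the authors' exact implementation.

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_metrics(binary_mask):
    """Per-object shape descriptors of the kind used to separate liquefaction
    debris (irregular outlines) from intact buildings (compact, rectangular).
    Solidity stands in for convexity/concavity and extent for rectangularity;
    these are illustrative proxies, not the paper's exact discriminant inputs."""
    out = []
    for region in regionprops(label(binary_mask)):
        out.append({
            "area": region.area,
            "solidity": region.solidity,      # area / convex hull area (convexity proxy)
            "extent": region.extent,          # area / bounding box area (rectangularity proxy)
            "eccentricity": region.eccentricity,
        })
    return out

# toy mask: a compact rectangle (building-like) and an L-shaped blob (damage-like)
mask = np.zeros((100, 100), dtype=bool)
mask[10:40, 10:60] = True                      # rectangle: solidity and extent near 1
mask[60:90, 60:70] = True                      # L-shape: lower solidity and extent
mask[60:70, 60:90] = True
for m in shape_metrics(mask):
    print(m)
```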

  17. KML Super Overlay to WMS Translator

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2007-01-01

    This translator is a server-based application that automatically generates KML super overlay configuration files required by Google Earth for map data access via the Open Geospatial Consortium WMS (Web Map Service) standard. The translator uses a set of URL parameters that mirror the WMS parameters as much as possible, and it also can generate a super overlay subdivision of any given area that is only loaded when needed, enabling very large areas of coverage at very high resolutions. It can make almost any dataset available as a WMS service visible and usable in any KML application, without the need to reformat the data.
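
    A super overlay works by nesting region-gated NetworkLinks so that finer tiles are fetched only when they occupy enough screen pixels. The sketch below, under the assumption of a plain EPSG:4326 quadtree with hypothetical child file names and an example WMS endpoint, shows how one such KML node wrapping a WMS GetMap request might be generated; it is illustrative and not the translator's actual output.

```python
def _region_xml(bbox):
    """<Region> element: Google Earth only loads the enclosing feature once
    the tile occupies at least minLodPixels on screen."""
    w, s, e, n = bbox
    return (f"<Region><LatLonAltBox><north>{n}</north><south>{s}</south>"
            f"<east>{e}</east><west>{w}</west></LatLonAltBox>"
            f"<Lod><minLodPixels>128</minLodPixels></Lod></Region>")

def superoverlay_kml(wms_url, layer, bbox, level=0, max_level=3, size=256):
    """One super-overlay tile as a KML string: a GroundOverlay whose image is
    fetched via the WMS GetMap interface, plus four region-gated NetworkLinks
    to the next subdivision level.  Child documents are referenced by
    hypothetical file names; a real server would generate them on demand."""
    w, s, e, n = bbox
    getmap = (f"{wms_url}?SERVICE=WMS&amp;VERSION=1.1.1&amp;REQUEST=GetMap"
              f"&amp;LAYERS={layer}&amp;SRS=EPSG:4326&amp;FORMAT=image/png"
              f"&amp;TRANSPARENT=TRUE&amp;WIDTH={size}&amp;HEIGHT={size}"
              f"&amp;BBOX={w},{s},{e},{n}")                 # ampersands escaped for XML
    overlay = (f"<GroundOverlay>{_region_xml(bbox)}<Icon><href>{getmap}</href></Icon>"
               f"<LatLonBox><north>{n}</north><south>{s}</south>"
               f"<east>{e}</east><west>{w}</west></LatLonBox></GroundOverlay>")
    links = []
    if level < max_level:
        mx, my = (w + e) / 2.0, (s + n) / 2.0
        children = [(w, s, mx, my), (mx, s, e, my), (w, my, mx, n), (mx, my, e, n)]
        for i, child in enumerate(children):
            links.append(f"<NetworkLink>{_region_xml(child)}"
                         f"<Link><href>tile_{level + 1}_{i}.kml</href>"     # hypothetical child file
                         f"<viewRefreshMode>onRegion</viewRefreshMode></Link></NetworkLink>")
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + overlay + "".join(links) + "</Document></kml>")

# top-level tile covering the whole layer (WMS endpoint and layer name are examples)
print(superoverlay_kml("https://example.org/wms", "srtm_dem", (-180.0, -90.0, 180.0, 90.0))[:200])
```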

  18. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

    Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.

  19. ATPP: A Pipeline for Automatic Tractography-Based Brain Parcellation

    PubMed Central

    Li, Hai; Fan, Lingzhong; Zhuo, Junjie; Wang, Jiaojian; Zhang, Yu; Yang, Zhengyi; Jiang, Tianzi

    2017-01-01

    There is a longstanding effort to parcellate the brain into areas based on micro-structural, macro-structural, or connectional features, forming various brain atlases. Among them, connectivity-based parcellation has gained much emphasis, especially with the considerable progress of multimodal magnetic resonance imaging in the past two decades. The Brainnetome Atlas published recently is such an atlas that follows the framework of connectivity-based parcellation. However, in the construction of the atlas, the deluge of high resolution multimodal MRI data and the time-consuming computation pose challenges, and there is still a shortage of publicly available tools dedicated to parcellation. In this paper, we present an integrated open source pipeline (https://www.nitrc.org/projects/atpp), named Automatic Tractography-based Parcellation Pipeline (ATPP), to realize the framework of parcellation with automatic processing and massive parallel computing. ATPP is developed to have a powerful and flexible command line version, taking multiple regions of interest as input, as well as a user-friendly graphical user interface version for parcellating a single region of interest. We demonstrate the two versions by parcellating two brain regions, the left precentral gyrus and middle frontal gyrus, on two independent datasets. In addition, ATPP has been successfully utilized and fully validated in a variety of brain regions and the human Brainnetome Atlas, showing the capacity to greatly facilitate brain parcellation. PMID:28611620

  20. Meal Microstructure Characterization from Sensor-Based Food Intake Detection.

    PubMed

    Doulah, Abul; Farooq, Muhammad; Yang, Xin; Parton, Jason; McCrory, Megan A; Higgins, Janine A; Sazonov, Edward

    2017-01-01

    To avoid the pitfalls of self-reported dietary intake, wearable sensors can be used. Many food ingestion sensors offer the ability to automatically detect food intake using time resolutions that range from 23 ms to 8 min. There is no defined standard time resolution to accurately measure ingestive behavior or a meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals such as duration of eating episode, the duration of actual ingestion, and number of eating events. Twelve participants wore the automatic ingestion monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, AIM, and the push button resampled at different time resolutions (0.1-30 s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value <0.001). There were no significant differences in the number of eating events for push button resolutions of 0.1, 1, and 5 s, but there were significant differences in resolutions of 10-30 s (p-value <0.05). The results suggest that the desired time resolution of sensor-based food intake detection should be ≤5 s to accurately detect meal microstructure. Furthermore, the AIM provides more accurate measurement of the eating episode duration than the diet diary.
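
    The effect of time resolution on meal microstructure can be reproduced by binning a high-rate intake marker at coarser and coarser resolutions and recomputing the metrics. The sketch below is a toy NumPy version of that resampling; the 10 Hz sampling matches the 0.1 s push-button rate, but the 15 s gap used to separate eating events is an illustrative assumption, not the paper's definition.

```python
import numpy as np

def eating_events(button, fs, resolution_s, gap_s=15.0):
    """Resample a 10 Hz push-button food-intake marker to a coarser time
    resolution and count 'eating events' as runs of positive bins separated
    by more than gap_s seconds without intake (gap_s is an assumption)."""
    n_per_bin = max(1, int(round(resolution_s * fs)))
    n_bins = int(np.ceil(len(button) / n_per_bin))
    padded = np.zeros(n_bins * n_per_bin)
    padded[:len(button)] = button
    bins = padded.reshape(n_bins, n_per_bin).max(axis=1)   # any press in bin -> intake

    events, in_event, silence = 0, False, 0.0
    for b in bins:
        if b:
            if not in_event:
                events += 1
                in_event = True
            silence = 0.0
        else:
            silence += resolution_s
            if silence > gap_s:
                in_event = False
    duration_of_ingestion = bins.sum() * resolution_s
    return events, duration_of_ingestion

# toy signal sampled at 10 Hz with two short chewing bouts (10 minutes for brevity)
fs = 10.0
sig = np.zeros(int(60 * 10 * fs))
sig[int(60 * fs):int(75 * fs)] = 1             # bout 1: 60-75 s
sig[int(200 * fs):int(230 * fs)] = 1           # bout 2: 200-230 s
for res in (0.1, 1, 5, 10, 30):
    print(res, eating_events(sig, fs, res))    # ingestion duration inflates at coarse resolutions
```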

  1. Meal Microstructure Characterization from Sensor-Based Food Intake Detection

    PubMed Central

    Doulah, Abul; Farooq, Muhammad; Yang, Xin; Parton, Jason; McCrory, Megan A.; Higgins, Janine A.; Sazonov, Edward

    2017-01-01

    To avoid the pitfalls of self-reported dietary intake, wearable sensors can be used. Many food ingestion sensors offer the ability to automatically detect food intake using time resolutions that range from 23 ms to 8 min. There is no defined standard time resolution to accurately measure ingestive behavior or a meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals such as duration of eating episode, the duration of actual ingestion, and number of eating events. Twelve participants wore the automatic ingestion monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, AIM, and the push button resampled at different time resolutions (0.1–30s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value <0.001). There were no significant differences in the number of eating events for push button resolutions of 0.1, 1, and 5 s, but there were significant differences in resolutions of 10–30s (p-value <0.05). The results suggest that the desired time resolution of sensor-based food intake detection should be ≤5 s to accurately detect meal microstructure. Furthermore, the AIM provides more accurate measurement of the eating episode duration than the diet diary. PMID:28770206

  2. Image-based red cell counting for wild animals blood.

    PubMed

    Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia

    2010-01-01

    An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution acquired on an optical microscope using Neubauer chambers are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua), and the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a fully automatic counting tool in laboratories for wild animal blood analysis or as a first counting stage in a semi-automatic counting tool.
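
    Counting roughly circular cells in a Neubauer-chamber micrograph is commonly done with a circle transform. The sketch below uses OpenCV's Hough circle detector as a generic illustration of that idea; the radii, thresholds, and file name are assumptions, and the authors' actual pipeline is not reproduced here.

```python
import cv2

def count_rbc(image_path, min_r=8, max_r=20):
    """Rough RBC count on a Neubauer-chamber micrograph via Hough circles.

    Generic sketch of circle-based counting, not the authors' pipeline;
    radii and thresholds are illustrative and would need tuning per
    species, magnification, and chamber grid."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    blur = cv2.medianBlur(gray, 5)                      # suppress grid lines and noise a little
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=2 * min_r, param1=80, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    return 0 if circles is None else circles.shape[1]

# usage (hypothetical file name):
# print(count_rbc("neubauer_field.png"))
```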

  3. SpotMetrics: An Open-Source Image-Analysis Software Plugin for Automatic Chromatophore Detection and Measurement

    PubMed Central

    Hadjisolomou, Stavros P.; El-Haddad, George

    2017-01-01

    Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high-framerate recording, which can be used to record chromatophore activity in greater detail and with higher accuracy in both the space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only a few minutes. We created a software plugin, “SpotMetrics,” that can automatically analyze high-resolution, high-framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors, which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes in time, such as eye pupil size or number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will be of benefit to other colleagues both in the cephalopod biology field and also within other disciplines. PMID:28298896

  4. Digital focusing of OCT images based on scalar diffraction theory and information entropy

    PubMed Central

    Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K.

    2012-01-01

    This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method. PMID:23162717
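
    The core of the method is to numerically propagate the complex en face field over candidate distances with a scalar-diffraction (angular spectrum) kernel and pick the distance whose image minimizes an entropy-based sharpness criterion. The sketch below implements that general scheme in NumPy; the wavelength, pixel pitch, and search range are placeholder values, and the entropy measure is a simple stand-in for the paper's image-definition criterion.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex en face field by dz (meters) using the angular
    spectrum (scalar diffraction) method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))     # evanescent components dropped
    H = np.exp(1j * kz * dz)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def image_entropy(amplitude):
    p = amplitude / (amplitude.sum() + 1e-12)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def autofocus(field, wavelength, dx, search_mm):
    """Pick the propagation distance that minimizes image entropy; sharper
    (better focused) amplitude images tend to have lower entropy.  A sketch
    of the general scheme, not the paper's exact criterion."""
    return min(search_mm, key=lambda dz: image_entropy(
        np.abs(angular_spectrum_propagate(field, dz * 1e-3, wavelength, dx))))

# toy usage with a random complex field standing in for an out-of-focus OCT plane
rng = np.random.default_rng(0)
field = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))
print(autofocus(field, wavelength=1.3e-6, dx=10e-6, search_mm=np.linspace(-2, 2, 21)))
```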

  5. Use of noncrystallographic symmetry for automated model building at medium to low resolution.

    PubMed

    Wiegels, Tim; Lamzin, Victor S

    2012-04-01

    A novel method is presented for the automatic detection of noncrystallographic symmetry (NCS) in macromolecular crystal structure determination which does not require the derivation of molecular masks or the segmentation of density. It was found that throughout structure determination the NCS-related parts may be differently pronounced in the electron density. This often results in the modelling of molecular fragments of variable length and accuracy, especially during automated model-building procedures. These fragments were used to identify NCS relations in order to aid automated model building and refinement. In a number of test cases higher completeness and greater accuracy of the obtained structures were achieved, specifically at a crystallographic resolution of 2.3 Å or poorer. In the best case, the method allowed the building of up to 15% more residues automatically and a tripling of the average length of the built fragments.

  6. Advances in Modal Analysis Using a Robust and Multiscale Method

    NASA Astrophysics Data System (ADS)

    Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.

    2010-12-01

    This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.

  7. Contrast, size, and orientation-invariant target detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Tong; Crawshaw, Richard D.

    1991-08-01

    Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast, size, and orientation invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets by using low-resolution Gabor functions which resist noise and background clutter effects; then, it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but are different from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images that contain multiple examples of military vehicles with different sizes and brightness in various background scenes and orientations.
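
    A Gabor-based detector of this kind amounts to filtering the frame with a small bank of oriented Gabor kernels at coarse and fine scales and thresholding the responses. The sketch below shows such a bank with OpenCV; kernel sizes, wavelengths, and the detection threshold are illustrative choices, not the parameters of the published algorithm.

```python
import numpy as np
import cv2

def gabor_response(image, wavelengths=(8, 16), n_orient=4, sigma=4.0):
    """Maximum response over a small bank of Gabor filters.  Running a coarse
    (long-wavelength) bank first to flag candidates and a fine bank to confirm
    them mirrors the low/high-resolution two-stage idea; parameters here are
    illustrative, not the paper's."""
    img = image.astype(np.float32)
    img = (img - img.mean()) / (img.std() + 1e-6)      # crude contrast normalization
    resp = np.zeros_like(img)
    for lam in wavelengths:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((31, 31), sigma, theta, lam, gamma=0.5, psi=0)
            resp = np.maximum(resp, np.abs(cv2.filter2D(img, cv2.CV_32F, kern)))
    return resp

# toy IR-like frame: a bright block on cluttered background
rng = np.random.default_rng(2)
frame = rng.normal(0, 1, (128, 128)).astype(np.float32)
frame[60:70, 60:75] += 5.0                             # simulated target
r = gabor_response(frame)
ys, xs = np.where(r > r.mean() + 4 * r.std())
print("candidate target pixels:", len(ys))
```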

  8. Super resolution for astronomical observations

    NASA Astrophysics Data System (ADS)

    Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng

    2018-05-01

    In order to obtain detailed information from multiple telescope observations, a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds show more detail; these results benefit from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most space-based or ground-based telescopes.
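
    Sub-pixel image registration followed by stacking is the backbone of this kind of reconstruction. The sketch below shows a plain shift-and-add using scikit-image phase correlation and a synthetic star; it illustrates only the registration and stacking step, not the pixel-reliability-based SR algorithm itself.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def shift_and_add(frames, upsample=20):
    """Align frames to the first one with subpixel phase correlation and
    average them.  A plain shift-and-add sketch of the registration step,
    not the full pixel-reliability SR reconstruction."""
    ref = frames[0].astype(float)
    stack = [ref]
    for frame in frames[1:]:
        # offset is the shift to apply to `frame` to register it with `ref`
        offset, _, _ = phase_cross_correlation(ref, frame, upsample_factor=upsample)
        stack.append(shift(frame.astype(float), offset))
    return np.mean(stack, axis=0)

# toy star field: one Gaussian "star" jittered between frames
y, x = np.mgrid[0:64, 0:64]
def star(cy, cx):
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 4.0)

frames = [star(32, 32), star(32.6, 31.4), star(31.2, 32.8)]
combined = shift_and_add(frames)
print(np.unravel_index(np.argmax(combined), combined.shape))   # stays near (32, 32)
```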

  9. Benthic Habitat Mapping Using Multispectral High-Resolution Imagery: Evaluation of Shallow Water Atmospheric Correction Techniques.

    PubMed

    Eugenio, Francisco; Marcello, Javier; Martin, Javier; Rodríguez-Esparragón, Dionisio

    2017-11-16

    Remote multispectral data can provide valuable information for monitoring coastal water ecosystems. Specifically, high-resolution satellite-based imaging systems, such as WorldView-2 (WV-2), can generate information at the spatial scales needed to implement conservation actions for protected littoral zones. However, the coastal water-leaving radiance arriving at the space-based sensor is often small compared to the reflected radiance. In this work, complex approaches that use an accurate radiative transfer code to correct the atmospheric effects, such as FLAASH, ATCOR and 6S, have been implemented for high-resolution imagery. They have been assessed in real scenarios using field spectroradiometer data. In this context, the three approaches have achieved excellent results, and a slightly superior performance of the 6S model-based algorithm has been observed. Finally, for the mapping of benthic habitats in shallow-water marine protected environments, a relevant application of the proposed atmospheric correction combined with an automatic deglinting procedure is presented. This approach is based on the integration of a linear mixing model of benthic classes within the radiative transfer model of the water. The complete methodology has been applied to selected ecosystems in the Canary Islands (Spain), and the obtained results allow the robust mapping of the spatial distribution and density of seagrass in coastal waters and the analysis of multitemporal variations related to human activity and climate change in littoral zones.

  10. Benthic Habitat Mapping Using Multispectral High-Resolution Imagery: Evaluation of Shallow Water Atmospheric Correction Techniques

    PubMed Central

    Eugenio, Francisco; Marcello, Javier; Martin, Javier

    2017-01-01

    Remote multispectral data can provide valuable information for monitoring coastal water ecosystems. Specifically, high-resolution satellite-based imaging systems, such as WorldView-2 (WV-2), can generate information at the spatial scales needed to implement conservation actions for protected littoral zones. However, the coastal water-leaving radiance arriving at the space-based sensor is often small compared to the reflected radiance. In this work, complex approaches that use an accurate radiative transfer code to correct the atmospheric effects, such as FLAASH, ATCOR and 6S, have been implemented for high-resolution imagery. They have been assessed in real scenarios using field spectroradiometer data. In this context, the three approaches have achieved excellent results, and a slightly superior performance of the 6S model-based algorithm has been observed. Finally, for the mapping of benthic habitats in shallow-water marine protected environments, a relevant application of the proposed atmospheric correction combined with an automatic deglinting procedure is presented. This approach is based on the integration of a linear mixing model of benthic classes within the radiative transfer model of the water. The complete methodology has been applied to selected ecosystems in the Canary Islands (Spain), and the obtained results allow the robust mapping of the spatial distribution and density of seagrass in coastal waters and the analysis of multitemporal variations related to human activity and climate change in littoral zones. PMID:29144444

  11. Time-Optimized High-Resolution Readout-Segmented Diffusion Tensor Imaging

    PubMed Central

    Reishofer, Gernot; Koschutnig, Karl; Langkammer, Christian; Porter, David; Jehna, Margit; Enzinger, Christian; Keeling, Stephen; Ebner, Franz

    2013-01-01

    Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper its clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically depending on spatial variations in signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion weighted images by means of complex independent component analysis. Moreover, the combination of these features enables completely user-independent processing of the diffusion data. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization compared to un-regularized data with respect to parameters relevant for fiber tracking, such as Mean Fiber Length, Track Count, Volume and Voxel Count. Specifically, for in vivo data, the findings suggest that tractography results from the regularized diffusion tensor based on one measurement (16 min) are comparable to those from the un-regularized data with three averages (48 min). This significant reduction in scan time renders high resolution (1×1×2.5 mm³) diffusion tensor imaging of the entire brain applicable in a clinical context. PMID:24019951

  12. Detecting Blind Fault with Fractal and Roughness Factors from High Resolution LiDAR DEM at Taiwan

    NASA Astrophysics Data System (ADS)

    Cheng, Y. S.; Yu, T. T.

    2014-12-01

    There is no obvious fault scarp associated with a blind fault. The traditional method of mapping this hidden geological structure relies on clusters of seismicity; however, a seismic network may not capture enough events, or a sufficiently complete cluster, to chart the locations of all potentially active blind faults within a short period of time. High-resolution DEMs gathered by LiDAR capture the actual terrain despite the presence of vegetation. A 1-meter-interval DEM of a mountainous region of Taiwan is analyzed with fractal, entropy and roughness calculations implemented in MATLAB code. By combining these measures, regions without sediment deposits are charted automatically. A possible blind fault associated with the Chia-Sen earthquake in southern Taiwan serves as the testing ground. GIS layers help to remove differences arising from the various geological formations; a multi-resolution fractal index is then computed around the target region. The type of fault movement controls the distribution of the fractal index, and the scale of the blind fault governs the degree of change in the fractal index. Landslides induced by rainfall and/or earthquakes alter the geomorphology more strongly than a blind fault does, so special treatment is required to remove these phenomena. The highly weathered conditions in Taiwan may erase any trace left on the DEM by a blind fault rupture when the recurrence interval exceeds hundreds of years. This is one of the obstacles to finding possible blind faults in Taiwan.
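
    A common way to turn a DEM tile into a fractal index is differential box counting: cover each block of the tile with boxes whose height scales with the box width, count how many are needed, and fit the log-log slope. The sketch below is a generic Python version of that measure (the study's own calculations were done in MATLAB); box sizes and the toy surfaces are illustrative.

```python
import numpy as np

def dbc_fractal_dimension(dem, box_sizes=(2, 4, 8, 16)):
    """Differential box-counting (DBC) fractal dimension of a square DEM tile.

    For each box size s the tile is cut into s x s blocks and the number of
    elevation boxes spanning each block's relief is summed; the dimension is
    the slope of log N(s) against log(1/s).  A generic sketch of the fractal
    measure, not the authors' MATLAB implementation."""
    dem = np.asarray(dem, dtype=float)
    m = dem.shape[0]
    zrange = max(dem.max() - dem.min(), 1e-9)
    counts, scales = [], []
    for s in box_sizes:
        h = s * zrange / m                   # box height scaled like the box width
        n = 0
        for i in range(0, m - m % s, s):
            for j in range(0, m - m % s, s):
                block = dem[i:i + s, j:j + s]
                n += int(np.ceil((block.max() - block.min()) / h)) + 1
        counts.append(n)
        scales.append(1.0 / s)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# toy comparison: smooth ramp vs. rough random surface of the same size
y, x = np.mgrid[0:64, 0:64]
smooth = 0.5 * x + 0.2 * y
rough = smooth + np.random.default_rng(3).normal(0, 5, (64, 64))
print(dbc_fractal_dimension(smooth), dbc_fractal_dimension(rough))   # rough > smooth
```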

  13. "Performance Of A Wafer Stepper With Automatic Intra-Die Registration Correction."

    NASA Astrophysics Data System (ADS)

    van den Brink, M. A.; Wittekoek, S.; Linders, H. F. D.; van Hout, F. J.; George, R. A.

    1987-01-01

    An evaluation of a wafer stepper with the new improved Philips/ASM-L phase grating alignment system is reported. It is shown that an accurate alignment system needs an accurate X-Y-θ wafer stage and an accurate reticle Z stage to realize optimum overlay accuracy. This follows from a discussion of the overlay budget and an alignment procedure model. The accurate wafer stage permits high overlay accuracy using global alignment only, thus eliminating the throughput penalty of align-by-field schemes. The accurate reticle Z stage enables an intra-die magnification control with respect to the wafer scale. Various overlay data are reported, which have been measured with the automatic metrology program of the stepper. It is demonstrated that the new dual alignment system (with the external spatial filter) has improved the ability to align to weakly reflecting layers. The results are supported by a Fourier analysis of the alignment signal. Resolution data are given for the PAS 2500 projection lenses, which show that the high overlay accuracy of the system is properly matched with submicron linewidth control. The results of a recently introduced 20mm i-line lens with a numerical aperture of 0.4 (Zeiss 10-78-58) are included.

  14. Studying post-etching silicon crystal defects on 300mm wafer by automatic defect review AFM

    NASA Astrophysics Data System (ADS)

    Zandiatashbar, Ardavan; Taylor, Patrick A.; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il

    2016-03-01

    Single crystal silicon wafers are the fundamental elements of the semiconductor manufacturing industry. The wafers produced by the Czochralski (CZ) process are very high quality single crystalline materials with known defects that are formed during crystal growth or modified by further processing. While defects can be unfavorable for yield for some manufactured electrical devices, a group of defects like oxide precipitates can have both positive and negative impacts on the final device. The spatial distribution of these defects may be found by scattering techniques. However, due to limitations of scattering (i.e. light wavelength), many crystal defects are either poorly classified or not detected. Therefore, high-throughput and accurate characterization of their shape and dimensions is essential for reviewing the defects and proper classification. While scanning electron microscopy (SEM) can provide high-resolution two-dimensional images, atomic force microscopy (AFM) is essential for obtaining three-dimensional information on the defects of interest (DOI), as it is known to provide the highest vertical resolution among all techniques [1]. However, AFM's low throughput, limited tip life, and the laborious effort of locating the DOI have been the limitations of this technique for defect review on 300 mm wafers. To address these limitations of AFM, automatic defect review AFM has been introduced recently [2], and is utilized in this work for studying DOI on a 300 mm silicon wafer. In this work, we carefully etched a 300 mm silicon wafer with a gaseous acid in a reducing atmosphere at a temperature and for a sufficient duration to decorate and grow the crystal defects to a size capable of being detected as light scattering defects [3]. The etched defects form a shallow structure, and their distribution and relative size are inspected by laser light scattering (LLS). However, several groups of defects could not be properly sized by the LLS due to the very shallow depth and low light scattering. Likewise, SEM cannot be used effectively for post-inspection defect review and classification of these very shallow types of defects. To verify and obtain accurate shape and three-dimensional information on those defects, automatic defect review AFM (ADR AFM) is utilized for accurate locating and imaging of DOI. In ADR AFM, non-contact mode imaging is used for non-destructive characterization and for preserving tip sharpness for data repeatability and reproducibility. Locating DOI and imaging are performed automatically with a throughput of many defects per hour. Topography images of DOI have been collected and compared with SEM images. ADR AFM has thus been shown to be a non-destructive metrology tool for defect review and for obtaining three-dimensional topography information.

  15. High-throughput characterization of film thickness in thin film materials libraries by digital holographic microscopy.

    PubMed

    Lai, Yiu Wai; Krause, Michael; Savan, Alan; Thienhaus, Sigurd; Koukourakis, Nektarios; Hofmann, Martin R; Ludwig, Alfred

    2011-10-01

    A high-throughput characterization technique based on digital holography for mapping film thickness in thin-film materials libraries was developed. Digital holographic microscopy is used for fully automatic measurements of the thickness of patterned films with nanometer resolution. The method has several significant advantages over conventional stylus profilometry: it is contactless and fast, substrate bending is compensated, and the experimental setup is simple. Patterned films prepared by different combinatorial thin-film approaches were characterized to investigate and demonstrate this method. The results show that this technique is valuable for the quick, reliable and high-throughput determination of the film thickness distribution in combinatorial materials research. Importantly, it can also be applied to thin films that have been structured by shadow masking.
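
    In digital holographic microscopy the thickness map follows from the reconstructed phase: in a reflection geometry a step of height t shifts the phase by 4πt/λ. The sketch below shows that conversion after 2-D phase unwrapping; the 633 nm wavelength, reflection geometry, and substrate-referencing step are assumptions for illustration, not the instrument's documented configuration.

```python
import numpy as np
from skimage.restoration import unwrap_phase

def thickness_from_phase(wrapped_phase, wavelength_nm=633.0):
    """Convert a reflection-mode phase map to film thickness in nanometers.

    In reflection, a step of height t adds a phase of 4*pi*t/lambda, so
    t = lambda * phi / (4*pi).  The 633 nm wavelength and reflection geometry
    are assumptions for illustration only."""
    phi = unwrap_phase(wrapped_phase)
    phi = phi - np.median(phi)                   # reference to the substrate level
    return wavelength_nm * phi / (4.0 * np.pi)   # thickness map in nm

# toy patterned film: a 150 nm pad on a flat substrate
step_nm = 150.0
true = np.zeros((128, 128))
true[40:90, 40:90] = step_nm
wrapped = np.angle(np.exp(1j * 4 * np.pi * true / 633.0))
print(thickness_from_phase(wrapped)[64, 64])     # ~150 nm
```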

  16. AssayR: A Simple Mass Spectrometry Software Tool for Targeted Metabolic and Stable Isotope Tracer Analyses.

    PubMed

    Wills, Jimi; Edwards-Hicks, Joy; Finch, Andrew J

    2017-09-19

    Metabolic analyses generally fall into two classes: unbiased metabolomic analyses and analyses that are targeted toward specific metabolites. Both techniques have been revolutionized by the advent of mass spectrometers with detectors that afford high mass accuracy and resolution, such as time-of-flights (TOFs) and Orbitraps. One particular area where this technology is key is in the field of metabolic flux analysis because the resolution of these spectrometers allows for discrimination between 13C-containing isotopologues and those containing 15N or other isotopes. While XCMS-based software is freely available for untargeted analysis of mass spectrometric data sets, it does not always identify metabolites of interest in a targeted assay. Furthermore, there is a paucity of vendor-independent software that deals with targeted analyses of metabolites and of isotopologues in particular. Here, we present AssayR, an R package that takes high resolution wide-scan liquid chromatography-mass spectrometry (LC-MS) data sets and tailors peak detection for each metabolite through a simple, iterative user interface. It automatically integrates peak areas for all isotopologues and outputs extracted ion chromatograms (EICs), absolute and relative stacked bar charts for all isotopologues, and a .csv data file. We demonstrate several examples where AssayR provides more accurate and robust quantitation than XCMS, and we propose that tailored peak detection should be the preferred approach for targeted assays. In summary, AssayR provides easy and robust targeted metabolite and stable isotope analyses on wide-scan data sets from high resolution mass spectrometers.

  17. AssayR: A Simple Mass Spectrometry Software Tool for Targeted Metabolic and Stable Isotope Tracer Analyses

    PubMed Central

    2017-01-01

    Metabolic analyses generally fall into two classes: unbiased metabolomic analyses and analyses that are targeted toward specific metabolites. Both techniques have been revolutionized by the advent of mass spectrometers with detectors that afford high mass accuracy and resolution, such as time-of-flights (TOFs) and Orbitraps. One particular area where this technology is key is in the field of metabolic flux analysis because the resolution of these spectrometers allows for discrimination between 13C-containing isotopologues and those containing 15N or other isotopes. While XCMS-based software is freely available for untargeted analysis of mass spectrometric data sets, it does not always identify metabolites of interest in a targeted assay. Furthermore, there is a paucity of vendor-independent software that deals with targeted analyses of metabolites and of isotopologues in particular. Here, we present AssayR, an R package that takes high resolution wide-scan liquid chromatography–mass spectrometry (LC-MS) data sets and tailors peak detection for each metabolite through a simple, iterative user interface. It automatically integrates peak areas for all isotopologues and outputs extracted ion chromatograms (EICs), absolute and relative stacked bar charts for all isotopologues, and a .csv data file. We demonstrate several examples where AssayR provides more accurate and robust quantitation than XCMS, and we propose that tailored peak detection should be the preferred approach for targeted assays. In summary, AssayR provides easy and robust targeted metabolite and stable isotope analyses on wide-scan data sets from high resolution mass spectrometers. PMID:28850215

  18. Resolution enhancement of robust Bayesian pre-stack inversion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Yin, Xingyao; Li, Kun; Zong, Zhaoyun

    2016-10-01

    AVO/AVA (amplitude variation with offset or angle) inversion is one of the most practical and useful approaches to estimating model parameters. So far, publications on AVO inversion in the Fourier domain have been quite limited in view of its poor stability and sensitivity to noise compared with time-domain inversion. To improve the resolution and stability of AVO inversion in the Fourier domain, a novel robust Bayesian pre-stack AVO inversion based on the mixed-domain formulation of stationary convolution is proposed, which addresses the instability and achieves superior resolution. The Fourier operator is integrated into the objective equation, which avoids the inverse Fourier transform in our inversion process. Furthermore, the background constraints of the model parameters are taken into consideration to improve the stability and reliability of the inversion, which compensates for the low-frequency components of the seismic signals. In addition, the different frequency components of the seismic signals decouple automatically. This allows the inverse problem to be solved by means of multi-component successive iterations, improving its convergence precision. As a result, superior resolution compared with conventional time-domain pre-stack inversion can be achieved easily. Synthetic tests illustrate that the proposed method achieves high-resolution results in close agreement with the theoretical model and verify its robustness to noise. Finally, application to a field data case demonstrates that the proposed method can obtain stable inversion results for elastic parameters from pre-stack seismic data, in conformity with the real logging data.

  19. Implementation of a high-resolution workstation for primary diagnosis of projection radiography images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Herron, John M.; Maitz, Glenn S.; Gur, David; Miller, Stephen L.; Straub, William H.; Fuhrman, Carl R.

    1990-08-01

    We designed and implemented a high-resolution video workstation as the central hardware component in a comprehensive multi-project program comparing the use of digital and film modalities. The workstation utilizes a 1.8 GByte real-time disk (RCI) capable of storing 400 full-resolution images and two Tektronix (GMA251) display controllers with 19" monitors (GMA202). The display is configured in a portrait format with a resolution of 1536 x 2048 x 8 bit, and operates at 75 Hz in a noninterlaced mode. Transmission of data through a 12 to 8 bit lookup table into the display controllers occurs at 20 MBytes/second (0.35 seconds per image). The workstation allows brightness (level) and contrast (window) to be easily manipulated with a trackball, and various processing options can be selected using push buttons. Display of any of the 400 images is also performed at 20 MBytes/sec (0.35 sec/image). A separate text display provides for the automatic display of patient history data and for a scoring form through which readers can interact with the system by means of a computer mouse. In addition, the workstation provides for the randomization of cases and for the immediate entry of diagnostic responses into a master database. Over the past year this workstation has been used for over 10,000 readings in diagnostic studies related to 1) image resolution; 2) film vs. soft display; 3) incorporation of patient history data into the reading process; and 4) usefulness of image processing.

  20. An object-based image analysis approach for aquaculture ponds precise mapping and monitoring: a case study of Tam Giang-Cau Hai Lagoon, Vietnam.

    PubMed

    Virdis, Salvatore Gonario Pasquale

    2014-01-01

    Monitoring and mapping shrimp farms, including their impact on land cover and land use, is critical to the sustainable management and planning of coastal zones. In this work, a methodology was proposed to set up a cost-effective and reproducible procedure that made use of satellite remote sensing, an object-based classification approach, and open-source software for mapping aquaculture areas with high planimetric and thematic accuracy between 2005 and 2008. The analysis focused on two characteristic areas of interest of the Tam Giang-Cau Hai Lagoon (in central Vietnam), which have farming systems similar to other coastal aquaculture worldwide: the first was primarily characterised by shrimp ponds locally referred to as "low tide" ponds, which are partially submerged areas; the second by earthed shrimp ponds, locally referred to as "high tide" ponds, which are non-submerged areas on the lagoon coast. The approach was based on the region-growing segmentation of high- and very high-resolution panchromatic images, SPOT5 and Worldview-1, and the unsupervised clustering classifier ISOSEG embedded in the SPRING non-commercial software. The results, the accuracy of which was tested with a field-based aquaculture inventory, showed that in favourable situations (high tide shrimp ponds), the classification results provided high rates of accuracy (>95 %) through a fully automatic object-based classification. In unfavourable situations (low tide shrimp ponds), the performance degraded due to the low contrast between the water and the pond embankments. In these situations, the automatic results were improved by manual delineation of the embankments. Worldview-1 necessarily showed better thematic accuracy, and precise maps have been realised at a scale of up to 1:2,000. However, SPOT5 provided comparable results in terms of the number of correctly classified ponds, but less accurate results in terms of the precision of mapped features. The procedure also demonstrated a high degree of reproducibility because it was applied to images with different spatial resolutions in an area that, during the investigated period, did not experience significant land cover changes.

  1. EpiProfile Quantifies Histone Peptides With Modifications by Extracting Retention Time and Intensity in High-resolution Mass Spectra*

    PubMed Central

    Yuan, Zuo-Fei; Lin, Shu; Molden, Rosalynn C.; Cao, Xing-Jun; Bhanu, Natarajan V.; Wang, Xiaoshi; Sidoli, Simone; Liu, Shichong; Garcia, Benjamin A.

    2015-01-01

    Histone post-translational modifications contribute to chromatin function through their chemical properties which influence chromatin structure and their ability to recruit chromatin interacting proteins. Nanoflow liquid chromatography coupled with high resolution tandem mass spectrometry (nanoLC-MS/MS) has emerged as the most suitable technology for global histone modification analysis because of the high sensitivity and the high mass accuracy of this approach that provides confident identification. However, analysis of histones with this method is even more challenging because of the large number and variety of isobaric histone peptides and the high dynamic range of histone peptide abundances. Here, we introduce EpiProfile, a software tool that discriminates isobaric histone peptides using the distinguishing fragment ions in their tandem mass spectra and extracts the chromatographic area under the curve using previous knowledge about peptide retention time. The accuracy of EpiProfile was evaluated by analysis of mixtures containing different ratios of synthetic histone peptides. In addition to label-free quantification of histone peptides, EpiProfile is flexible and can quantify different types of isotopically labeled histone peptides. EpiProfile is unique in generating layouts (i.e. relative retention time) of histone peptides when compared with manual quantification of the data and other programs (such as Skyline), filling the need of an automatic and freely available tool to quantify labeled and non-labeled modified histone peptides. In summary, EpiProfile is a valuable nanoflow liquid chromatography coupled with high resolution tandem mass spectrometry-based quantification tool for histone peptides, which can also be adapted to analyze nonhistone protein samples. PMID:25805797

  2. Applicability of Various Interpolation Approaches for High Resolution Spatial Mapping of Climate Data in Korea

    NASA Astrophysics Data System (ADS)

    Jo, A.; Ryu, J.; Chung, H.; Choi, Y.; Jeon, S.

    2018-04-01

    The purpose of this study is to create a new dataset of spatially interpolated monthly climate data for South Korea at high spatial resolution (approximately 30 m) by performing various spatio-statistical interpolations and comparing them with forecast LDAPS gridded climate data provided by the Korea Meteorological Administration (KMA). Automatic Weather System (AWS) and Automated Synoptic Observing System (ASOS) data from 2017 obtained from KMA were used for the spatial mapping of temperature and rainfall: instantaneous temperature and 1-hour accumulated precipitation at 09:00 am on 31st March, 21st June, 23rd September, and 24th December. Of the observation data, 80 percent of the points (478) were used for interpolation and the remaining 120 points for validation. With the training data and a digital elevation model (DEM) with 30 m resolution, inverse distance weighting (IDW), co-kriging, and kriging were performed using ArcGIS 10.3.1 software and Python 3.6.4. Bias and root mean square error were computed to compare prediction performance quantitatively. When statistical analysis was performed for each cluster using the 20% validation data, co-kriging was more suitable for the spatialization of instantaneous temperature than the other interpolation methods. On the other hand, the IDW technique was appropriate for the spatialization of precipitation.
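
    Of the interpolators compared, inverse distance weighting is the simplest: each grid prediction is a weighted average of the station values, with weights decaying as a power of distance. The sketch below is a minimal NumPy version of IDW on toy station data; the study itself used ArcGIS 10.3.1 and Python, and the power parameter here is an illustrative default.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: each prediction is a distance-weighted
    average of the observed station values.  Minimal sketch of the method;
    the power of 2 is a common default, not the study's tuned value."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

# toy example: four temperature stations, predicted on a small grid
stations = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
temps = np.array([12.0, 14.0, 13.0, 16.0])
gx, gy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(idw(stations, temps, grid).reshape(5, 5).round(2))
```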

  3. DSM Based Orientation of Large Stereo Satellite Image Blocks

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using a RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth. Checks against this ground truth indicate a lateral error of 10 meters.

  4. Indicator Species Population Monitoring in Antarctica with UAV

    NASA Astrophysics Data System (ADS)

    Zmarz, A.; Korczak-Abshire, M.; Storvold, R.; Rodzewicz, M.; Kędzierska, I.

    2015-08-01

    A program to monitor bird and pinniped species in the vicinity of Arctowski Station, King George Island, South Shetlands, Antarctica, has been conducted over the past 38 years. Annual monitoring of these indicator species includes estimations of breeding population sizes of three Pygoscelis penguin species: Adélie, gentoo and chinstrap. Six penguin colonies situated on the western shores of two bays, Admiralty and King George, are investigated. To study changes in penguin populations, Unmanned Aerial Vehicles (UAVs) were used for the first time in the 2014/15 austral summer season. During photogrammetric flights, high-resolution images of eight penguin breeding colonies were taken. The obtained high-resolution images were used to estimate breeding population size and were compared with the results of measurements taken at the same time from the ground. During this Antarctic expedition, eight successful photogrammetry missions (total distance 1500 km) were performed. Images were taken with digital SLR cameras (Canon 700D, Nikon D5300, Nikon D5100) with a 35 mm objective lens. Flight altitudes of 350-400 AGL allowed images to be taken with a ground sample distance (GSD) of less than 5 cm. The ImageJ software analysis method was tested to provide automatic population estimates from the obtained images. The use of UAVs for monitoring indicator species enabled data acquisition from areas inaccessible by ground methods.

  5. Multipurpose Hyperspectral Imaging System

    NASA Technical Reports Server (NTRS)

    Mao, Chengye; Smith, David; Lanoue, Mark A.; Poole, Gavin H.; Heitschmidt, Jerry; Martinez, Luis; Windham, William A.; Lawrence, Kurt C.; Park, Bosoon

    2005-01-01

    A hyperspectral imaging system of high spectral and spatial resolution that incorporates several innovative features has been developed to incorporate a focal plane scanner (U.S. Patent 6,166,373). This feature enables the system to be used for both airborne/spaceborne and laboratory hyperspectral imaging with or without relative movement of the imaging system, and it can be used to scan a target of any size as long as the target can be imaged at the focal plane; for example, automated inspection of food items and identification of single-celled organisms. The spectral resolution of this system is greater than that of prior terrestrial multispectral imaging systems. Moreover, unlike prior high-spectral resolution airborne and spaceborne hyperspectral imaging systems, this system does not rely on relative movement of the target and the imaging system to sweep an imaging line across a scene. This compact system (see figure) consists of a front objective mounted at a translation stage with a motorized actuator, and a line-slit imaging spectrograph mounted within a rotary assembly with a rear adaptor to a charged-coupled-device (CCD) camera. Push-broom scanning is carried out by the motorized actuator which can be controlled either manually by an operator or automatically by a computer to drive the line-slit across an image at a focal plane of the front objective. To reduce the cost, the system has been designed to integrate as many as possible off-the-shelf components including the CCD camera and spectrograph. The system has achieved high spectral and spatial resolutions by using a high-quality CCD camera, spectrograph, and front objective lens. Fixtures for attachment of the system to a microscope (U.S. Patent 6,495,818 B1) make it possible to acquire multispectral images of single cells and other microscopic objects.

  6. Application of the SRI cloud-tracking technique to rapid-scan GOES observations

    NASA Technical Reports Server (NTRS)

    Wolf, D. E.; Endlich, R. M.

    1980-01-01

    An automatic cloud tracking system was applied to multilayer clouds associated with severe storms. The method was tested using rapid-scan observations of Hurricane Eloise obtained by the GOES satellite on 22 September 1975. Cloud tracking was performed using clustering based either on visible or infrared data. The clusters were tracked using two different techniques. Using data of 4 km and 8 km resolution, the automatic system yielded results comparable in accuracy and coverage to those obtained by NASA analysts using the Atmospheric and Oceanographic Information Processing System.

  7. Improving Science Communication with Responsive Web Design

    NASA Astrophysics Data System (ADS)

    Hilverda, M.

    2013-12-01

    Effective science communication requires clarity in both content and presentation. Content is increasingly being viewed via the Web across a broad range of devices, which can vary in screen size, resolution, and pixel density. Readers access the same content from desktop computers, tablets, smartphones, and wearable computing devices. Creating separate presentation formats optimized for each device is inefficient and unrealistic as new devices continually enter the marketplace. Responsive web design is an approach that puts content first within a presentation design that responds automatically to its environment. This allows for one platform to be maintained that can be used effectively for every screen. The layout adapts to screens of all sizes, ensuring easy viewing of content for readers regardless of their device. Responsive design is accomplished primarily by the use of media queries within style sheets, which allows for changes to layout properties to be defined based on media types (e.g. screen, print) and resolution. Images and other types of multimedia can also be defined to scale automatically to fit different screen dimensions, although some media types require additional effort for proper implementation. Hardware changes, such as high pixel density screens, also present new challenges for effective presentation of content. High pixel density screens contain a greater number of pixels within a given screen area, increasing the pixels per inch (PPI) compared to standard screens. The result is increased clarity for text and vector media types, but often decreased clarity for standard resolution raster images. Media queries and other custom solutions can assist by specifying higher resolution images for high pixel density screens. Unfortunately, increasing image resolution results in significantly more data being transferred to the device. Web traffic on mobile devices such as smartphones and tablets is on a steady growth trajectory and many mobile devices around the world use low-bandwidth connections. Communicating science effectively includes efficient delivery of the information to the reader. To meet these criteria, responsive designs should also incorporate "mobile first" elements such as serving ideal image sizes (a low resolution cell phone does not need to receive a large desktop image) and a focus on fast, readable content delivery. The technical implementation of responsive web design is constantly changing as new web standards and approaches become available. However, fundamental design principles such as grid layouts, clear typography, and proper use of white space should be an important part of content delivery within any responsive design. This presentation will discuss current responsive design approaches for improving scientific communication across multiple devices, operating systems, and bandwidth capacities. The presentation will also include example responsive designs for scientific papers and websites. Implementing a responsive design approach with a focus on content and fundamental design principles is an important step to ensuring scientific information remains clear and accessible as screens and devices continue to evolve.

  8. A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: Application to adaptive segmentation of in vivo MRI.

    PubMed

    Iglesias, Juan Eugenio; Augustinack, Jean C; Nguyen, Khoa; Player, Christopher M; Player, Allison; Wright, Michelle; Roy, Nicole; Frosch, Matthew P; McKee, Ann C; Wald, Lawrence L; Fischl, Bruce; Van Leemput, Koen

    2015-07-15

    Automated analysis of MRI data of the subregions of the hippocampus requires computational atlases built at a higher resolution than those that are typically used in current neuroimaging studies. Here we describe the construction of a statistical atlas of the hippocampal formation at the subregion level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise delineations were made possible by the extraordinary resolution of the scans. In addition to the subregions, manual annotations for neighboring structures (e.g., amygdala, cortex) were obtained from a separate dataset of in vivo, T1-weighted MRI scans of the whole brain (1mm resolution). The manual labels from the in vivo and ex vivo data were combined into a single computational atlas of the hippocampal formation with a novel atlas building algorithm based on Bayesian inference. The resulting atlas can be used to automatically segment the hippocampal subregions in structural MRI images, using an algorithm that can analyze multimodal data and adapt to variations in MRI contrast due to differences in acquisition hardware or pulse sequences. The applicability of the atlas, which we are releasing as part of FreeSurfer (version 6.0), is demonstrated with experiments on three different publicly available datasets with different types of MRI contrast. The results show that the atlas and companion segmentation method: 1) can segment T1 and T2 images, as well as their combination, 2) replicate findings on mild cognitive impairment based on high-resolution T2 data, and 3) can discriminate between Alzheimer's disease subjects and elderly controls with 88% accuracy in standard resolution (1mm) T1 data, significantly outperforming the atlas in FreeSurfer version 5.3 (86% accuracy) and classification based on whole hippocampal volume (82% accuracy). Copyright © 2015. Published by Elsevier Inc.

  9. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, the EM fields for each frequency were calculated on independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to bridge the gap between the initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes that account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low; however, such meshes depend on the initial model guess. Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined, decoupled meshes.
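    A minimal sketch of the refinement criterion described above, assuming a structured grid as a stand-in for the paper's unstructured FEM meshes: the spatial variation of the imaged parameter is computed per cell and the cells with the largest variations are flagged for refinement. The function name and the thresholding fraction are illustrative, not part of the published code.

```python
import numpy as np

def flag_cells_for_refinement(model, top_fraction=0.1):
    """Flag grid cells whose local parameter variation is largest.

    `model` is a 2D array of the imaged parameter (e.g. log-resistivity)
    on a structured inverse grid; the paper works on unstructured FEM
    meshes, so this only illustrates the refinement criterion.
    """
    gy, gx = np.gradient(model)                  # spatial variation per cell
    variation = np.hypot(gx, gy)
    threshold = np.quantile(variation, 1.0 - top_fraction)
    return variation >= threshold                # boolean mask: refine these cells

# toy example: a smooth background with one sharp conductive block
model = np.zeros((64, 64))
model[20:30, 35:50] = 2.0
mask = flag_cells_for_refinement(model, top_fraction=0.05)
print(f"{mask.sum()} of {mask.size} cells flagged for refinement")
```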

  10. An open source automatic quality assurance (OSAQA) tool for the ACR MRI phantom.

    PubMed

    Sun, Jidi; Barnes, Michael; Dowling, Jason; Menk, Fred; Stanwell, Peter; Greer, Peter B

    2015-03-01

    Routine quality assurance (QA) is necessary and essential to ensure MR scanner performance. This includes geometric distortion, slice positioning and thickness accuracy, high contrast spatial resolution, intensity uniformity, ghosting artefact and low contrast object detectability. However, this manual process can be very time consuming. This paper describes the development and validation of an open source tool to automate the MR QA process, which aims to increase physicist efficiency, and improve the consistency of QA results by reducing human error. The OSAQA software was developed in Matlab and the source code is available for download from http://jidisun.wix.com/osaqa-project/. During program execution QA results are logged for immediate review and are also exported to a spreadsheet for long-term machine performance reporting. For the automatic contrast QA test, a user specific contrast evaluation was designed to improve accuracy for individuals on different display monitors. American College of Radiology QA images were acquired over a period of 2 months to compare manual QA and the results from the proposed OSAQA software. OSAQA was found to significantly reduce the QA time from approximately 45 to 2 min. Both the manual and OSAQA results were found to agree with regard to the recommended criteria and the differences were insignificant compared to the criteria. The intensity homogeneity filter is necessary to obtain an image with acceptable quality and at the same time keeps the high contrast spatial resolution within the recommended criterion. The OSAQA tool has been validated on scanners with different field strengths and manufacturers. A number of suggestions have been made to improve both the phantom design and QA protocol in the future.
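    The OSAQA software itself is written in Matlab; as a rough illustration of the kind of check it automates, the following Python sketch computes an ACR-style percent integral uniformity over a phantom ROI. The ROI handling and the filter size are simplified assumptions, not the tool's actual procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def percent_integral_uniformity(image, roi_mask, probe_size=5):
    """ACR-style percent integral uniformity (PIU) over a phantom ROI.

    A small mean filter is applied and PIU is computed from the highest and
    lowest local means inside the ROI: PIU = 100 * (1 - (hi - lo)/(hi + lo)).
    This is a simplified stand-in for the OSAQA uniformity test, which follows
    the full ACR procedure.
    """
    local_mean = uniform_filter(image.astype(float), size=probe_size)
    vals = local_mean[roi_mask]
    hi, lo = vals.max(), vals.min()
    return 100.0 * (1.0 - (hi - lo) / (hi + lo))

# toy usage: a uniform disc with a mild left-to-right shading ramp
yy, xx = np.mgrid[:256, :256]
roi = (yy - 128) ** 2 + (xx - 128) ** 2 < 100 ** 2
img = 1000 + 20 * (xx / 255.0)
print(f"PIU = {percent_integral_uniformity(img, roi):.1f}%")
```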

  11. Analysis of open-pit mines using high-resolution topography from UAV

    NASA Astrophysics Data System (ADS)

    Chen, Jianping; Li, Ke; Sofia, Giulia; Tarolli, Paolo

    2015-04-01

    Among the anthropogenic topographic signatures on the Earth, open-pit mines deserve particular attention, since they significantly affect the Earth's surface and its related processes (e.g. erosion, pollution). Their geomorphological analysis, therefore, represents a real challenge for the Earth science community. The purpose of this research is to characterize open-pit mining features using a recently published landscape metric, the Slope Local Length of Auto-Correlation (SLLAC) (Sofia et al., 2014), and high-resolution DEMs (Digital Elevation Models) derived from UAV-surveyed topography. The research focuses on two main case studies of iron mines located in the Beijing district (P.R. China). The main topographic information (Digital Surface Models, DSMs) was derived using an Unmanned Aerial Vehicle (UAV) and the Structure from Motion (SfM) photogrammetric technique. The results underline the effectiveness of the adopted methodologies and survey techniques in characterizing the main geomorphic features of the mines. Thanks to the SLLAC, the terraced area produced by the multi-benched, sideways-moving method of iron extraction is automatically delineated, and the extent of the related terraces is automatically estimated using SLLAC-derived parameters. The analysis of the correlation length orientation, furthermore, allows the terrace orientation with respect to north to be identified, and helps to characterize the shape of the open-pit area. This provides a basis for large-scale, low-cost topographic surveys for sustainable environmental planning and, for example, for the mitigation of the anthropogenic environmental impact of mining. References Sofia G., Marinello F, Tarolli P. 2014. A new landscape metric for the identification of terraced sites: the Slope Local Length of Auto-Correlation (SLLAC). ISPRS Journal of Photogrammetry and Remote Sensing, doi:10.1016/j.isprsjprs.2014.06.018

  12. Neuroanatomical Markers of Social Hierarchy Recognition in Humans: A Combined ERP/MRI Study.

    PubMed

    Santamaría-García, Hernando; Burgaleta, Miguel; Sebastián-Gallés, Nuria

    2015-07-29

    Social hierarchy is an ubiquitous principle of social organization across animal species. Although some progress has been made in our understanding of how humans infer hierarchical identity, the neuroanatomical basis for perceiving key social dimensions of others remains unexplored. Here, we combined event-related potentials and structural MRI to reveal the neuroanatomical substrates of early status recognition. We designed a covertly simulated hierarchical setting in which participants performed a task either with a superior or with an inferior player. Participants showed higher amplitude in the N170 component when presented with a picture of a superior player compared with an inferior player. Crucially, the magnitude of this effect correlated with brain morphology of the posterior cingulate cortex, superior temporal gyrus, insula, fusiform gyrus, and caudate nucleus. We conclude that early recognition of social hierarchies relies on the structural properties of a network involved in the automatic recognition of social identity. Humans can perceive social hierarchies very rapidly, an ability that is key for social interactions. However, some individuals are more sensitive to hierarchical information than others. Currently, it is unknown how brain structure supports such fast-paced processes of social hierarchy perception and their individual differences. Here, we addressed this issue for the first time by combining the high temporal resolution of event-related potentials (ERPs) and the high spatial resolution of structural MRI. This methodological approach allowed us to unveil a novel association between ERP neuromarkers of social hierarchy perception and the morphology of several cortical and subcortical brain regions typically assumed to play a role in automatic processes of social cognition. Our results are a step forward in our understanding of the human social brain. Copyright © 2015 the authors 0270-6474/15/3510843-08$15.00/0.

  13. Automatic determination of the artery vein ratio in retinal images

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; van Ginneken, Bram; Abràmoff, Michael D.

    2010-03-01

    A lower ratio between the widths of the arteries and veins on the retina (the Arteriolar-to-Venular diameter Ratio, AVR) is well established to be predictive of stroke and other cardiovascular events in adults, as well as of an increased risk of retinopathy of prematurity in premature infants. This work presents an automatic method that detects the location of the optic disc, determines the appropriate region of interest (ROI), classifies the vessels in the ROI into arteries and veins, measures their widths and calculates the AVR. After vessel segmentation and vessel width determination, the optic disc is located and the system eliminates all vessels outside the AVR measurement ROI. The remaining vessels are thinned, and vessel crossing and bifurcation points are removed, leaving a set of vessel segments containing centerline pixels. Features extracted from each centerline pixel are used to assign it a soft label indicating the likelihood that the pixel is part of a vein. As all centerline pixels in a connected segment should be of the same type, the median soft label is assigned to each centerline pixel in the segment. Next, artery-vein pairs are matched using an iterative algorithm and the widths of the vessels are used to calculate the AVR. We train and test the algorithm using a set of 25 high resolution digital color fundus photographs and a reference standard that indicates, for the major vessels in the images, whether they are an artery or a vein. We compared the AVR values produced by our system with those determined using a computer assisted method in 15 high resolution digital color fundus photographs and obtained a correlation coefficient of 0.881.
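    The following sketch illustrates two steps of the pipeline above with toy data structures: assigning each vessel segment the median soft label of its centerline pixels, and forming a simple width ratio. The real system pairs arteries and veins and uses standard summary formulas, so the `simple_avr` helper and the per-segment data are hypothetical stand-ins.

```python
import numpy as np

def label_segments(soft_labels_per_segment):
    """Assign each vessel segment the median soft 'vein-ness' of its
    centerline pixels, as described for connected segments above."""
    return {seg: float(np.median(p)) for seg, p in soft_labels_per_segment.items()}

def simple_avr(widths, vein_score, threshold=0.5):
    """Toy arteriolar-to-venular ratio: mean artery width / mean vein width.
    The published method matches artery-vein pairs and uses standard summary
    formulas, so this is only an illustrative stand-in."""
    arteries = [widths[s] for s, v in vein_score.items() if v < threshold]
    veins = [widths[s] for s, v in vein_score.items() if v >= threshold]
    return np.mean(arteries) / np.mean(veins)

# hypothetical per-segment data: soft labels per centerline pixel, mean widths in pixels
soft = {"seg1": [0.1, 0.2, 0.15], "seg2": [0.8, 0.9, 0.7], "seg3": [0.3, 0.2, 0.4]}
width = {"seg1": 6.0, "seg2": 9.0, "seg3": 5.5}
print(f"AVR = {simple_avr(width, label_segments(soft)):.2f}")
```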

  14. A new method of inshore ship detection in high-resolution optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Hu, Qifeng; Du, Yaling; Jiang, Yunqiu; Ming, Delie

    2015-10-01

    Ships are important targets both militarily and for water transportation, so their detection has great significance. In the military field, automatic ship detection can be used to monitor ship activity in enemy harbors and maritime areas, and thus to analyze enemy naval power. In the civilian field, it can be used to monitor harbor traffic and illegal activities such as illegal fishing, smuggling and piracy. In recent years, ship detection research has mainly concentrated on three categories of imagery: forward-looking infrared images, downward-looking SAR images, and optical remote sensing images with a sea background. Little research has addressed ship detection in optical remote sensing images with a harbor background, because in high-resolution optical imagery the gray-scale and texture features of ships are similar to those of the coast. In this paper, we put forward an effective harbor ship target detection method. First, to overcome the shortcomings of the traditional difference method in locating the histogram valley used as the segmentation threshold, we propose an iterative histogram-valley segmentation method that separates the harbor and ships from the water quite well. Second, since berthed ships in optical remote sensing images usually lead to discontinuous harbor edges, we use the Hough transform to extract the harbor edges: lines are detected by the Hough transform and lines with similar slopes are connected into a new line, yielding continuous harbor edges. A secondary segmentation of the land-sea separation result then yields the ships. Finally, we calculate the aspect ratio of the ROIs and thereby remove targets that are not ships. The experimental results show that our method has good robustness and can tolerate a certain degree of noise and occlusion.
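    As an illustration of the harbor-edge step, the sketch below detects line segments with OpenCV's probabilistic Hough transform and crudely groups segments of similar slope into single edges. The parameter values, the merging heuristic and the synthetic test image are assumptions for demonstration, not the authors' implementation.

```python
import cv2
import numpy as np

def harbor_edge_lines(binary_edge_img, angle_tol_deg=5.0):
    """Detect straight edge segments and group those with similar slope,
    mimicking the harbor-edge extraction step described above."""
    segments = cv2.HoughLinesP(binary_edge_img, rho=1, theta=np.pi / 180,
                               threshold=60, minLineLength=40, maxLineGap=10)
    if segments is None:
        return []
    groups = {}  # rounded angle bucket -> list of segments
    for x1, y1, x2, y2 in segments[:, 0, :]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        groups.setdefault(int(round(angle / angle_tol_deg)), []).append((x1, y1, x2, y2))
    merged = []
    for segs in groups.values():
        # crude merge via the extreme-x endpoints of the group
        # (adequate for the mostly horizontal quay edges in this toy example)
        pts = np.array(segs).reshape(-1, 2)
        i, j = pts[:, 0].argmin(), pts[:, 0].argmax()
        merged.append((tuple(pts[i]), tuple(pts[j])))
    return merged

# usage sketch on a synthetic scene with one bright rectangular "quay"
img = np.zeros((200, 300), np.uint8)
cv2.rectangle(img, (30, 120), (270, 190), 255, -1)
edges = cv2.Canny(img, 50, 150)
print(harbor_edge_lines(edges))
```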

  15. The influence of action observation on action execution: Dissociating the contribution of action on perception, perception on action, and resolving conflict.

    PubMed

    Deschrijver, Eliane; Wiersema, Jan R; Brass, Marcel

    2017-04-01

    For more than 15 years, motor interference paradigms have been used to investigate the influence of action observation on action execution. Most research on so-called automatic imitation has focused on variables that play a modulating role or investigated potential confounding factors. Interestingly, furthermore, a number of functional magnetic resonance imaging (fMRI) studies have tried to shed light on the functional mechanisms and neural correlates involved in imitation inhibition. However, these fMRI studies, presumably due to poor temporal resolution, have primarily focused on high-level processes and have neglected the potential role of low-level motor and perceptual processes. In the current EEG study, we therefore aimed to disentangle the influence of low-level perceptual and motoric mechanisms from high-level cognitive mechanisms. We focused on potential congruency differences in the visual N190 - a component related to the processing of biological motion, the Readiness Potential - a component related to motor preparation, and the high-level P3 component. Interestingly, we detected congruency effects in each of these components, suggesting that the interference effect in an automatic imitation paradigm is not only related to high-level processes such as self-other distinction but also to more low-level influences of perception on action and action on perception. Moreover, we documented relationships of the neural effects with (autistic) behavior.

  16. An automatic procedure for high-resolution earthquake locations: a case study from the TABOO near fault observatory (Northern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Valoroso, Luisa; Chiaraluce, Lauro; Di Stefano, Raffaele; Latorre, Diana; Piccinini, Davide

    2014-05-01

    The characterization of the geometry, kinematics and rheology of fault zones from seismological data depends on our capability to accurately locate the largest possible number of low-magnitude seismic events. To this aim, we have been working for the past three years to develop an advanced modular earthquake location procedure able to automatically retrieve high-resolution earthquake catalogues directly from continuous waveform data. We use seismograms recorded at about 60 seismic stations located both at the surface and at depth. The network covers an area of about 80x60 km with a mean inter-station distance of 6 km. These stations are part of a Near Fault Observatory (TABOO; http://taboo.rm.ingv.it/), consisting of multi-sensor stations (seismic, geodetic, geochemical and electromagnetic). This permanent scientific infrastructure, managed by the INGV, is devoted to studying the earthquake preparatory phase and the fast/slow (i.e., seismic/aseismic) deformation processes active along the Alto Tiberina fault (ATF) located in the northern Apennines (Italy). The ATF is potentially one of the rare worldwide examples of an active low-angle (< 15°) normal fault accommodating crustal extension and characterized by a regular occurrence of micro-earthquakes. The modular procedure combines: i) a sensitive detection algorithm optimized to declare low-magnitude events; ii) an accurate picking procedure that provides consistently weighted P- and S-wave arrival times, P-wave first motion polarities and the maximum waveform amplitude for local magnitude calculation; iii) both linearized iterative and non-linear global-search earthquake location algorithms to compute accurate absolute locations of single events in a 3D geological model (see Latorre et al., same session); iv) cross-correlation and double-difference location methods to compute high-resolution relative event locations. This procedure is now running off-line with a delay of one week relative to real time, and we are implementing it to obtain high-resolution double-difference earthquake locations in real time (DDRT). We show locations of ~30k low-magnitude earthquakes recorded during the past 4 years (2010-2013) of network operation, reaching a catalogue completeness magnitude of 0.2. The spatiotemporal seismicity distribution has an almost constant and high rate of r = 24.30e-04 events/(day·km²), interrupted by low to moderate magnitude seismic sequences such as the 2010 Pietralunga sequence (ML 3.8) and the still ongoing 2013 Gubbio sequence (ML 4.0 on 22nd December 2013). The low-magnitude seismicity images the fine-scale geometry of the ATF: an E-dipping plane at low angle (15°) from 4 km down to ~15 km depth. In the ATF hanging-wall, meanwhile, we observe the activation of minor high-angle synthetic and antithetic normal faults (4-5 km long) confined at depth by the detachment. Up to now, both seismic sequences have activated only these high-angle fault segments.
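    The relative arrival times that feed the cross-correlation/double-difference step can be illustrated with a short waveform cross-correlation sketch. The sub-sample refinement, filtering and quality weighting of the real procedure are omitted, and the function name is hypothetical.

```python
import numpy as np
from scipy.signal import correlate

def cc_delay(trace_a, trace_b, dt):
    """Relative delay (seconds) between two event waveforms recorded at the
    same station, from the peak of their cross-correlation. Only the basic
    measurement is shown; real double-difference workflows add sub-sample
    interpolation, band-pass filtering and correlation-based weighting."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    cc = correlate(a, b, mode="full")
    lag = cc.argmax() - (len(b) - 1)          # samples by which a lags b
    return lag * dt, cc.max() / len(a)        # delay and normalized peak CC

# toy example: identical wavelet shifted by 15 samples at 100 Hz sampling
dt = 0.01
t = np.arange(0, 2, dt)
wavelet = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * t)
delay, cc_peak = cc_delay(np.roll(wavelet, 15), wavelet, dt)
print(f"delay = {delay:.2f} s, CC = {cc_peak:.2f}")
```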

  17. The Automatic Measuring Machines and Ground-Based Astrometry

    NASA Astrophysics Data System (ADS)

    Sergeeva, T. P.

    The introduction of automatic measuring machines into astronomical investigations a little more than a quarter of a century ago has substantially increased the range and scale of the projects that astronomers have been able to realize since then. During that time there have been dozens of photographic sky surveys, which have covered the whole sky more than once. Thanks to the high accuracy and speed of automatic measuring machines, photographic astrometry has been able to create high-precision catalogs such as CpC2. Investigations of the structure and kinematics of the stellar components of our Galaxy have been revolutionized in the last decade by the advent of automated plate measuring machines. But in an age of rapidly evolving electronic detectors and soon-expected space-based catalogs, one might think that the twilight hours of astronomical photography have arrived. Against that view, astronomers such as D. Monet (U.S.N.O.), L. G. Taff (STScI) and M. K. Tsvetkov (IA BAS), among others, have argued for several directions in which photographic astronomy can evolve. One of them is that "...special efforts must be taken to extract useful information from the photographic archives before the plates degrade and the technology required to measure them disappears". Another is the minimization of the systematic errors of ground-based star catalogs through appropriate reduction technology and sufficiently dense and precise space-based reference star catalogs. In addition, the use of higher-resolution, higher-quantum-efficiency emulsions such as Tech Pan and some of the new methods for processing the digitized information hold great promise for future deep (B<25) surveys (Bland-Hawthorn et al. 1993, AJ, 106, 2154). Thus not only is the continued intensive operation of all existing automatic measuring machines apparently needed, but the design, development and deployment of a new generation of portable, mobile scanners is also necessary. This paper reports the classification and main parameters of some modern automatic measuring machines, the scientific research carried out with them, and some of the methods used to ensure high accuracy and reliability. This work is supported by Grant N U4I000 from the International Science Foundation.

  18. 3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse

    PubMed Central

    Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.

    2009-01-01

    We developed the Case Cryo-imaging system that provides information rich, very high-resolution, color brightfield, and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified, brightfield/ fluorescence microscope, and a robotic xyz imaging system positioner, all of which is fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse, in which enhanced green fluorescent protein was expressed under gamma actin promoter in smooth muscle cells, gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm, over very large regions of mouse brain. Software is fully automated with fully programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole animal in vivo imaging and histology. PMID:19248166

  19. Defense applications of the CAVE (CAVE automatic virtual environment)

    NASA Astrophysics Data System (ADS)

    Isabelle, Scott K.; Gilkey, Robert H.; Kenyon, Robert V.; Valentino, George; Flach, John M.; Spenny, Curtis H.; Anderson, Timothy R.

    1997-07-01

    The CAVE is a multi-person, room-sized, high-resolution, 3D video and auditory environment, which can be used to present very immersive virtual environment experiences. This paper describes the CAVE technology and the capability of the CAVE system as originally developed at the Electronics Visualization Laboratory of the University of Illinois-Chicago and as more recently implemented by Wright State University (WSU) in the Armstrong Laboratory at Wright-Patterson Air Force Base (WPAFB). One planned use of the WSU/WPAFB CAVE is research addressing the appropriate design of display and control interfaces for controlling uninhabited aerial vehicles. The WSU/WPAFB CAVE has a number of features that make it well-suited to this work: (1) 360 degrees surround, plus floor, high resolution visual displays, (2) virtual spatialized audio, (3) the ability to integrate real and virtual objects, and (4) rapid and flexible reconfiguration. However, even though the CAVE is likely to have broad utility for military applications, it does have certain limitations that may make it less well-suited to applications that require 'natural' haptic feedback, vestibular stimulation, or an ability to interact with close detailed objects.

  20. Processing Ocean Images to Detect Large Drift Nets

    NASA Technical Reports Server (NTRS)

    Veenstra, Tim

    2009-01-01

    A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose served by this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to the shores on which they sometimes impinge. The software is capable of near-real-time processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator's computer for analysis.

  1. Quantifying Contributions of the Cricopharyngeus to Upper Esophageal Sphincter Pressure Changes by Means of Intramuscular Electromyography and High-Resolution Manometry

    PubMed Central

    Jones, Corinne A.; Hammer, Michael J.; Hoffman, Matthew R.; McCulloch, Timothy M.

    2014-01-01

    Objectives We sought to determine whether the association between cricopharyngeus muscle activity and upper esophageal sphincter pressure may change in a task-dependent fashion. We hypothesized that more automated tasks related to swallow or airway protection would yield a stronger association than would more volitional tasks related to tidal breathing or voice production. Methods Six healthy adult subjects underwent simultaneous intramuscular electromyography of the cricopharyngeus muscle and high-resolution manometry of the upper esophageal sphincter. Correlation coefficients were calculated to characterize the association between the time-linked series. Results Cricopharyngeus muscle activity was most strongly associated with upper esophageal sphincter pressure during swallow and effortful exhalation tasks (r = 0.77 and 0.79, respectively; P < .01). The association was also less variable during swallow and effortful exhalation. Conclusions These findings suggest a greater coupling for the more automatic tasks, and may suggest less coupling and more flexibility for the more volitional, voice-related tasks. These findings support the important role of central patterning for respiratory- and swallow-related tasks. PMID:24633943

  2. Sugarcane Crop Extraction Using Object-Oriented Method from ZY-3 High Resolution Satellite Tlc Image

    NASA Astrophysics Data System (ADS)

    Luo, H.; Ling, Z. Y.; Shao, G. Z.; Huang, Y.; He, Y. Q.; Ning, W. Y.; Zhong, Z.

    2018-04-01

    Sugarcane is one of the most important crops in Guangxi, China. With the development of satellite remote sensing technology, more remotely sensed images can be used for monitoring the sugarcane crop. With its Three Line Camera (TLC) images, wide coverage and stereoscopic mapping ability, the Chinese ZY-3 high resolution stereoscopic mapping satellite can provide additional information for sugarcane crop monitoring, such as spectral, shape and texture differences between the forward, nadir and backward images. A digital surface model (DSM) derived from the ZY-3 TLC images can also provide height information for the sugarcane crop. In this study, we attempt to extract the sugarcane crop from ZY-3 images acquired during the harvest period. Ortho-rectified TLC images, a fused image and the DSM are processed for the extraction. An object-oriented method is then used for image segmentation, example collection and feature extraction. The results of our study show that, with the help of ZY-3 TLC images, sugarcane crop information at harvest time can be automatically extracted, with an overall accuracy of about 85.3%.

  3. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
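    A software analogue of the block-based L1-norm operation is sketched below, assuming precomputed HOG cell histograms; the configurable cell sizes, image-pyramid handling and fixed-point arithmetic of the VLSI design are not modelled.

```python
import numpy as np

def l1_block_normalize(cell_hists, block=(2, 2), eps=1e-6):
    """L1-normalize HOG cell histograms over sliding blocks of cells.

    `cell_hists` has shape (n_cells_y, n_cells_x, n_bins). This is a software
    illustration of the block-based L1-norm step that the hardware above
    accelerates.
    """
    cy, cx, nb = cell_hists.shape
    by, bx = block
    out = []
    for y in range(cy - by + 1):
        for x in range(cx - bx + 1):
            v = cell_hists[y:y + by, x:x + bx].ravel()
            out.append(v / (np.abs(v).sum() + eps))   # L1 normalization
    return np.array(out)            # shape: (n_blocks, by*bx*n_bins)

# toy usage: random 8x8 grid of 9-bin cell histograms
rng = np.random.default_rng(0)
blocks = l1_block_normalize(rng.random((8, 8, 9)))
print(blocks.shape, blocks[0].sum())   # each block descriptor sums to ~1
```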

  4. Prospective motion correction of high-resolution magnetic resonance imaging data in children.

    PubMed

    Brown, Timothy T; Kuperman, Joshua M; Erhart, Matthew; White, Nathan S; Roddey, J Cooper; Shankaranarayanan, Ajit; Han, Eric T; Rettmann, Dan; Dale, Anders M

    2010-10-15

    Motion artifacts pose significant problems for the acquisition and analysis of high-resolution magnetic resonance imaging data. These artifacts can be particularly severe when studying pediatric populations, where greater patient movement reduces the ability to clearly view and reliably measure anatomy. In this study, we tested the effectiveness of a new prospective motion correction technique, called PROMO, as applied to making neuroanatomical measures in typically developing school-age children. This method attempts to address the problem of motion at its source by keeping the measurement coordinate system fixed with respect to the subject throughout image acquisition. The technique also performs automatic rescanning of images that were acquired during intervals of particularly severe motion. Unlike many previous techniques, this approach adjusts for both in-plane and through-plane movement, greatly reducing image artifacts without the need for additional equipment. Results show that the use of PROMO notably enhances subjective image quality, reduces errors in Freesurfer cortical surface reconstructions, and significantly improves the subcortical volumetric segmentation of brain structures. Further applications of PROMO for clinical and cognitive neuroscience are discussed. Copyright 2010 Elsevier Inc. All rights reserved.

  5. The analysis of selected orientation methods of architectural objects' scans

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub S.; Kajdewicz, Irmina; Zawieska, Dorota

    2015-05-01

    Terrestrial laser scanning (TLS) is commonly used in different areas, inter alia in modelling architectural objects. One of the most important parts of TLS data processing is scan registration, which significantly affects the accuracy of the high resolution photogrammetric documentation generated from the data. This process is time consuming, especially in the case of a large number of scans, and is mostly based on automatic detection and semi-automatic measurement of control points placed on the object. In the case of complicated historical buildings, it is sometimes forbidden to place survey targets on the object, or it may be difficult to distribute them in an optimal way. Such problems encourage the search for new methods of scan registration that eliminate the step of placing survey targets on the object. In this paper the results of the target-based registration method are presented. The survey targets placed on the walls of the historical chambers of the Museum of King Jan III's Palace at Wilanów and on the walls of the ruins of the Bishops Castle in Iłża were used for scan orientation. Several variants of orientation were performed, taking into account different placements and different numbers of survey marks. Afterwards, in subsequent research, raster images were generated from the scans and the SIFT and SURF image-processing algorithms were used to automatically search for corresponding natural points. The use of the automatically identified points for TLS data orientation was analysed. The results of both methods of TLS data registration were summarized and presented in numerical and graphical form.
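    The natural-point matching stage can be sketched with OpenCV's SIFT detector and Lowe's ratio test, as below. SURF, which the paper also evaluates, is left out because it is not available in default OpenCV builds, and the synthetic test images are purely illustrative.

```python
import cv2
import numpy as np

def sift_correspondences(img_a, img_b, ratio=0.75):
    """Candidate corresponding points between two scan-derived raster images,
    found with SIFT keypoints and Lowe's ratio test. An illustrative sketch of
    the feature-matching step only, not the paper's full orientation workflow."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    pairs = [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt)
             for m, n in matches if m.distance < ratio * n.distance]
    return pairs   # list of ((x_a, y_a), (x_b, y_b)) tie points

# usage sketch with two synthetic textured images (second one shifted)
rng = np.random.default_rng(1)
base = cv2.GaussianBlur((rng.random((300, 300)) * 255).astype(np.uint8), (5, 5), 0)
shifted = np.roll(base, (10, 20), axis=(0, 1))
print(len(sift_correspondences(base, shifted)), "tie points found")
```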

  6. Automatic Mosaicking of Satellite Imagery Considering the Clouds

    NASA Astrophysics Data System (ADS)

    Kang, Yifei; Pan, Li; Chen, Qi; Zhang, Tong; Zhang, Shasha; Liu, Zhang

    2016-06-01

    With the rapid development of high resolution remote sensing for earth observation, satellite imagery is widely used in the fields of resource investigation, environmental protection and agricultural research. Image mosaicking is an important part of satellite imagery production. However, the presence of clouds causes problems for automatic image mosaicking in two main respects: 1) image blurring may be introduced during the image dodging process, and 2) automatically generated seamlines may pass through cloudy areas. To address these problems, an automatic mosaicking method for cloudy satellite imagery is proposed in this paper. Firstly, modified Otsu thresholding and morphological processing are employed to extract cloudy areas and obtain the percentage of cloud cover. Then, the cloud detection results are used to optimize the dodging and mosaicking processes, so that the mosaic image is composed of clear-sky areas rather than cloudy areas and the clear-sky areas remain sharp and undistorted. Chinese GF-1 wide-field-of-view orthoimages are employed as experimental data. The performance of the proposed approach is evaluated in four respects: the effectiveness of the cloud detection, the sharpness of clear-sky areas, the rationality of the seamlines, and efficiency. The evaluation results demonstrate that the mosaic image obtained by our method has fewer clouds, better internal color consistency and better visual clarity than that obtained by the traditional method. The time required by the proposed method for 17 scenes of GF-1 orthoimages is within 4 hours on a desktop computer, which meets general production requirements for massive satellite imagery.
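    A plain-Otsu illustration of the cloud-extraction step is sketched below; the paper uses a modified Otsu variant, so the thresholding, kernel size and synthetic test scene here are assumptions.

```python
import cv2
import numpy as np

def cloud_mask(band, kernel_size=5):
    """Rough cloud mask from a single optical band: Otsu thresholding
    (clouds are bright) followed by morphological opening and closing to
    remove speckle and fill small holes, then the cloud-cover percentage."""
    img8 = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask, 100.0 * (mask > 0).mean()

# toy usage: a dark scene with two bright "clouds"
scene = np.full((400, 400), 40, np.float32)
cv2.circle(scene, (120, 150), 60, 220, -1)
cv2.circle(scene, (300, 280), 40, 200, -1)
mask, cover = cloud_mask(scene)
print(f"cloud cover = {cover:.1f}%")
```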

  7. 3D models mapping optimization through an integrated parameterization approach: cases studies from Ravenna

    NASA Astrophysics Data System (ADS)

    Cipriani, L.; Fantini, F.; Bertacchi, S.

    2014-06-01

    Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses released applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve better quality textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for achieving a high-quality 3D representation, since "apparent colour" is a fundamental aspect of Cultural Heritage documentation. Complex meshes without a native parameterization have to be "flattened" or "unwrapped" into the (u,v) parameter space, with the main objective of mapping them with a single image. This result can be obtained using two different strategies: the former automatic and faster, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, which produce a sort of "atlas" of the original model in the parameter space that is in many instances not adequate and negatively affects the overall quality of the representation. Using different solutions in synergy, ranging from semantics-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.

  8. Comparison between manual and semi-automatic segmentation of nasal cavity and paranasal sinuses from CT images.

    PubMed

    Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F

    2007-01-01

    Segmentation of medical image data has become increasingly important in recent years. The results are used for diagnosis, surgical planning or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow or whether fully automatic segmentation of the paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of the paranasal sinuses and the nasal cavity. Manual segmentation is performed with custom software, whereas semi-automatic segmentation is realized with a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses consisting of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D reconstruction. The segmentation time was reduced from 960 minutes with manual segmentation to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses, the volume differences between manual and semi-automatic segmentation are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used, for example, for robot-assisted systems. Nevertheless, neither procedure is suitable for the everyday surgical workflow, because both take too much time. Fully automatic and reproducible segmentation algorithms are needed for segmentation of the paranasal sinuses and nasal cavity.

  9. An automatic approach for 3D registration of CT scans

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Saber, Eli; Dianat, Sohail; Vantaram, Sreenath Rao; Abhyankar, Vishwas

    2012-03-01

    CT (Computed tomography) is a widely employed imaging modality in the medical field. Normally, a volume of CT scans is prescribed by a doctor when a specific region of the body (typically neck to groin) is suspected of being abnormal. The doctors are required to make professional diagnoses based upon the obtained datasets. In this paper, we propose an automatic registration algorithm that helps healthcare personnel to automatically align corresponding scans from 'Study' to 'Atlas'. The proposed algorithm is capable of aligning both 'Atlas' and 'Study' into the same resolution through 3D interpolation. After retrieving the scanned slice volume in the 'Study' and the corresponding volume in the original 'Atlas' dataset, a 3D cross correlation method is used to identify and register various body parts.
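    A minimal sketch of the registration idea described above, assuming a pure translation: both volumes are resampled to a common spacing by 3D interpolation and the offset is taken from the peak of their 3D cross-correlation. The function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import correlate

def register_translation_3d(study, atlas, study_spacing, atlas_spacing):
    """Estimate the integer-voxel translation aligning a 'Study' CT volume to
    an 'Atlas' volume: resample to the atlas spacing, then take the peak of
    the 3D cross-correlation. Rotation and scaling are ignored in this sketch."""
    factors = [s / a for s, a in zip(study_spacing, atlas_spacing)]
    study_rs = zoom(study.astype(float), factors, order=1)   # 3D interpolation
    atlas_f = atlas.astype(float)
    cc = correlate(study_rs - study_rs.mean(), atlas_f - atlas_f.mean(),
                   mode="full", method="fft")
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    return tuple(int(p - (s - 1)) for p, s in zip(peak, atlas_f.shape))

# toy usage: a bright cube shifted by (4, -3, 2) voxels
atlas = np.zeros((40, 40, 40)); atlas[10:20, 15:25, 5:15] = 1.0
study = np.roll(atlas, (4, -3, 2), axis=(0, 1, 2))
print(register_translation_3d(study, atlas, (1, 1, 1), (1, 1, 1)))
```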

  10. Method and apparatus for telemetry adaptive bandwidth compression

    NASA Technical Reports Server (NTRS)

    Graham, Olin L.

    1987-01-01

    Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are hence sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sample, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.

  11. A research of road centerline extraction algorithm from high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yushan; Xu, Tingfa

    2017-09-01

    Satellite remote sensing has become one of the most effective methods for land surface monitoring in recent years, due to advantages such as short revisit periods, large coverage and rich information content. Road extraction is an important field in the application of high resolution remote sensing images, and an intelligent, automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used for road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM), which proves effective for noisy image segmentation, is proposed in this paper. Firstly, the image is segmented using the SFCM. Secondly, the segmentation result is processed by mathematical morphology to remove the joint regions. Thirdly, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
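    For illustration, the sketch below runs a standard fuzzy c-means segmentation (the spatial term that defines SFCM is omitted) and then thins the road class to a centerline with morphological skeletonization; the parameter choices and the toy image are assumptions, not the paper's setup.

```python
import numpy as np
from skimage.morphology import skeletonize

def fcm(pixels, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy c-means on flattened pixel values of shape (N, features).
    The paper's SFCM adds a spatial-neighbourhood term to the membership
    update; that term is omitted here for brevity."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, len(pixels)))
    u /= u.sum(axis=0)
    for _ in range(n_iter):
        um = u ** m
        centers = um @ pixels / um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(pixels[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        inv = 1.0 / dist ** (2 / (m - 1))
        u = inv / inv.sum(axis=0)
    return u, centers

# segment a toy grayscale image, pick the brighter ("road") cluster, thin it
img = np.zeros((80, 80)); img[38:42, :] = 1.0            # a horizontal road stripe
u, centers = fcm(img.reshape(-1, 1))
road = u[np.argmax(centers[:, 0])].reshape(img.shape) > 0.5
centerline = skeletonize(road)                            # morphological thinning
print(centerline.sum(), "centerline pixels")
```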

  12. FIB/SEM technology and high-throughput 3D reconstruction of dendritic spines and synapses in GFP-labeled adult-generated neurons.

    PubMed

    Bosch, Carles; Martínez, Albert; Masachs, Nuria; Teixeira, Cátia M; Fernaud, Isabel; Ulloa, Fausto; Pérez-Martínez, Esther; Lois, Carlos; Comella, Joan X; DeFelipe, Javier; Merchán-Pérez, Angel; Soriano, Eduardo

    2015-01-01

    The fine analysis of synaptic contacts is usually performed using transmission electron microscopy (TEM) and its combination with neuronal labeling techniques. However, the complex 3D architecture of neuronal samples calls for their reconstruction from serial sections. Here we show that focused ion beam/scanning electron microscopy (FIB/SEM) allows efficient, complete, and automatic 3D reconstruction of identified dendrites, including their spines and synapses, from GFP/DAB-labeled neurons, with a resolution comparable to that of TEM. We applied this technology to analyze the synaptogenesis of labeled adult-generated granule cells (GCs) in mice. 3D reconstruction of dendritic spines in GCs aged 3-4 and 8-9 weeks revealed two different stages of dendritic spine development and unexpected features of synapse formation, including vacant and branched dendritic spines and presynaptic terminals establishing synapses with up to 10 dendritic spines. Given the reliability, efficiency, and high resolution of FIB/SEM technology and the wide use of DAB in conventional EM, we consider FIB/SEM fundamental for the detailed characterization of identified synaptic contacts in neurons in a high-throughput manner.

  13. FIB/SEM technology and high-throughput 3D reconstruction of dendritic spines and synapses in GFP-labeled adult-generated neurons

    PubMed Central

    Bosch, Carles; Martínez, Albert; Masachs, Nuria; Teixeira, Cátia M.; Fernaud, Isabel; Ulloa, Fausto; Pérez-Martínez, Esther; Lois, Carlos; Comella, Joan X.; DeFelipe, Javier; Merchán-Pérez, Angel; Soriano, Eduardo

    2015-01-01

    The fine analysis of synaptic contacts is usually performed using transmission electron microscopy (TEM) and its combination with neuronal labeling techniques. However, the complex 3D architecture of neuronal samples calls for their reconstruction from serial sections. Here we show that focused ion beam/scanning electron microscopy (FIB/SEM) allows efficient, complete, and automatic 3D reconstruction of identified dendrites, including their spines and synapses, from GFP/DAB-labeled neurons, with a resolution comparable to that of TEM. We applied this technology to analyze the synaptogenesis of labeled adult-generated granule cells (GCs) in mice. 3D reconstruction of dendritic spines in GCs aged 3–4 and 8–9 weeks revealed two different stages of dendritic spine development and unexpected features of synapse formation, including vacant and branched dendritic spines and presynaptic terminals establishing synapses with up to 10 dendritic spines. Given the reliability, efficiency, and high resolution of FIB/SEM technology and the wide use of DAB in conventional EM, we consider FIB/SEM fundamental for the detailed characterization of identified synaptic contacts in neurons in a high-throughput manner. PMID:26052271

  14. High-resolution motion-compensated imaging photoplethysmography for remote heart rate monitoring

    NASA Astrophysics Data System (ADS)

    Chung, Audrey; Wang, Xiao Yu; Amelard, Robert; Scharfenberger, Christian; Leong, Joanne; Kulinski, Jan; Wong, Alexander; Clausi, David A.

    2015-03-01

    We present a novel non-contact photoplethysmographic (PPG) imaging system based on high-resolution video recordings of the ambient reflectance of human bodies that compensates for body motion and takes advantage of skin erythema fluctuations to improve measurement reliability for the purpose of remote heart rate monitoring. A single measurement location for recording the ambient reflectance is automatically identified on an individual, and the motion of that location is tracked over time. Based on the determined motion information, motion-compensated reflectance measurements at different wavelengths can be acquired for the measurement location, thus providing more reliable measurements for the same location on the body over time. The reflectance measurement is used to determine skin erythema fluctuations over time, resulting in the capture of a PPG signal with a high signal-to-noise ratio. To test the efficacy of the proposed system, a set of experiments involving human motion in a front-facing position was performed under natural ambient light. The experimental results demonstrated that using skin erythema fluctuations yields noticeably improved average accuracy in heart rate measurement compared to previously proposed non-contact PPG imaging systems.
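    The final rate-estimation step can be sketched as locating the dominant spectral peak of the (already motion-compensated) PPG trace within a physiological band; the band limits and the synthetic signal below are illustrative assumptions.

```python
import numpy as np

def heart_rate_bpm(ppg, fs, lo_hz=0.7, hi_hz=3.0):
    """Estimate heart rate from a PPG trace by locating the dominant spectral
    peak within a physiological band (about 42-180 bpm). The motion
    compensation and erythema extraction described above are assumed to have
    produced `ppg` already."""
    sig = ppg - np.mean(ppg)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# toy usage: 30 s of a noisy 72 bpm (1.2 Hz) pulse sampled at 30 fps
fs, t = 30.0, np.arange(0, 30, 1 / 30.0)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(0).standard_normal(len(t))
print(f"{heart_rate_bpm(ppg, fs):.0f} bpm")   # ~72
```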

  15. Modis, SeaWIFS, and Pathfinder funded activities

    NASA Technical Reports Server (NTRS)

    Evans, Robert H.

    1995-01-01

    MODIS (Moderate Resolution Imaging Spectrometer), SeaWIFS (Sea-viewing Wide Field Sensor), Pathfinder, and DSP (Digital Signal Processor) objectives are summarized. An overview of current progress is given for the automatic processing database, client/server status, matchup database, and DSP support.

  16. Individual differences in automatic emotion regulation affect the asymmetry of the LPP component.

    PubMed

    Zhang, Jing; Zhou, Renlai

    2014-01-01

    The main goal of this study was to investigate how automatic emotion regulation altered the hemispheric asymmetry of ERPs elicited by emotion processing. We examined the effect of individual differences in automatic emotion regulation on the late positive potential (LPP) when participants were viewing blocks of positive high arousal, positive low arousal, negative high arousal and negative low arousal pictures from International affect picture system (IAPS). Two participant groups were categorized by the Emotion Regulation-Implicit Association Test which has been used in previous research to identify two groups of participants with automatic emotion control and with automatic emotion express. The main finding was that automatic emotion express group showed a right dominance of the LPP component at posterior electrodes, especially in high arousal conditions. But no right dominance of the LPP component was observed for automatic emotion control group. We also found the group with automatic emotion control showed no differences in the right posterior LPP amplitude between high- and low-arousal emotion conditions, while the participants with automatic emotion express showed larger LPP amplitude in the right posterior in high-arousal conditions compared to low-arousal conditions. This result suggested that AER (Automatic emotion regulation) modulated the hemispheric asymmetry of LPP on posterior electrodes and supported the right hemisphere hypothesis.

  17. Paediatric interventional cardiology: flat detector versus image intensifier using a test object

    NASA Astrophysics Data System (ADS)

    Vano, E.; Ubeda, C.; Martinez, L. C.; Leyton, F.; Miranda, P.

    2010-12-01

    Entrance surface air kerma (ESAK) values and image quality parameters were measured and compared for two biplane angiography x-ray systems dedicated to paediatric interventional cardiology, one equipped with image intensifiers (II) and the other one with dynamic flat detectors (FDs). Polymethyl methacrylate phantoms of different thicknesses, ranging from 8 to 16 cm, and a Leeds TOR 18-FG test object were used. The parameters of the image quality evaluated were noise, signal-difference-to-noise ratio (SdNR), high contrast spatial resolution (HCSR) and three figures of merit combining entrance doses and signal-to-noise ratios or HCSR. The comparisons showed a better behaviour of the II-based system in the low contrast region over the whole interval of thicknesses. The FD-based system showed a better performance in HCSR. The FD system evaluated would need around two times more dose than the II system evaluated to reach a given value of SdNR; moreover, a better spatial resolution was measured (and perceived in conventional monitors) for the system equipped with flat detectors. According to the results of this paper, the use of dynamic FD systems does not lead to an automatic reduction in ESAK or to an automatic improvement in image quality by comparison with II systems. Any improvement also depends on the setting of the x-ray systems and it should still be possible to refine these settings for some of the dynamic FDs used in paediatric cardiology.

  18. Online model evaluation of large-eddy simulations covering Germany with a horizontal resolution of 156 m

    NASA Astrophysics Data System (ADS)

    Hansen, Akio; Ament, Felix; Lammert, Andrea

    2017-04-01

    Large-eddy simulations (LES) have been performed for several decades, but due to computational limits most studies have been restricted to small domains or idealised initial and boundary conditions. Within the High definition clouds and precipitation for advancing climate prediction (HD(CP)2) project, realistic, weather-forecast-like LES runs were performed with the newly developed ICON LES model for several days. The domain covers central Europe with a horizontal resolution down to 156 m. The setup consists of more than 3 billion grid cells, so that a single 3D output dump requires roughly 500 GB. A newly developed online evaluation toolbox was created to check instantaneously whether the model simulations are realistic. The toolbox automatically combines model results with observations and generates quicklooks for various variables; so far temperature and humidity profiles, cloud cover, integrated water vapour, precipitation and many more are included. All kinds of observations, such as aircraft observations, soundings and precipitation radar networks, are used. For each dataset a specific module is created, which allows easy handling and extension of the toolbox. Most of the observations are automatically downloaded from the Standardized Atmospheric Measurement Database (SAMD). The evaluation tool is intended to support scientists in monitoring computationally costly model simulations and to give a first overview of model performance. The structure of the toolbox as well as the SAMD database are presented. Furthermore, the toolbox was applied to an ICON LES sensitivity study, for which example results are shown.

  19. Real-time myocardium segmentation for the assessment of cardiac function variation

    NASA Astrophysics Data System (ADS)

    Zoehrer, Fabian; Huellebrand, Markus; Chitiboi, Teodora; Oechtering, Thekla; Sieren, Malte; Frahm, Jens; Hahn, Horst K.; Hennemuth, Anja

    2017-03-01

    Recent developments in MRI enable the acquisition of image sequences with high spatio-temporal resolution. Cardiac motion can be captured without gating and triggering. Image size and contrast relations differ from conventional cardiac MRI cine sequences requiring new adapted analysis methods. We suggest a novel segmentation approach utilizing contrast invariant polar scanning techniques. It has been tested with 20 datasets of arrhythmia patients. The results do not differ significantly more between automatic and manual segmentations than between observers. This indicates that the presented solution could enable clinical applications of real-time MRI for the examination of arrhythmic cardiac motion in the future.

  20. Research on intelligent scenic security early warning platform based on high resolution image: real scene linkage and real-time LBS

    NASA Astrophysics Data System (ADS)

    Li, Baishou; Huang, Yu; Lan, Guangquan; Li, Tingting; Lu, Ting; Yao, Mingxing; Luo, Yuandan; Li, Boxiang; Qian, Yongyou; Gao, Yujiu

    2015-12-01

    This paper designs and implements a security monitoring system for tourists within a scenic area. With the system, scenic-area staff can perceive and monitor visitors automatically and in real time, while visitors can determine their own location within the area in real time and obtain 3D imagery of the scenic area. The early-warning function can also realize a "parent-child" linkage, preventing elderly people and children from getting lost or wandering off. The research results provide a theoretical basis and practical reference for an effective security early-warning platform and for the further development of virtual reality.

  1. Automated seamline detection along skeleton for remote sensing image mosaicking

    NASA Astrophysics Data System (ADS)

    Zhang, Hansong; Chen, Jianyu; Liu, Xin

    2015-08-01

    The automatic generation of a seamline along the skeleton of the overlap region is a key problem in the mosaicking of Remote Sensing (RS) images. As RS image resolution improves, it is necessary to ensure rapid and accurate processing under complex conditions. We therefore introduce an automated seamline detection method for RS image mosaicking based on image objects and overlap-region contour contraction, which ensures both the generality and the efficiency of mosaicking. The experiments show that this method can select seamlines for RS images quickly and accurately over arbitrary overlap regions, enabling rapid RS image mosaicking in surveying and mapping production.

  2. Properties of O dwarf stars in 30 Doradus

    NASA Astrophysics Data System (ADS)

    Sabín-Sanjulián, Carolina; VFTS Collaboration

    2017-11-01

    We perform a quantitative spectroscopic analysis of 105 presumably single O dwarf stars in 30 Doradus, located within the Large Magellanic Cloud. We use mid-to-high resolution multi-epoch optical spectroscopic data obtained within the VLT-FLAMES Tarantula Survey. Stellar and wind parameters are derived by means of the automatic tool iacob-gbat, which is based on a large grid of fastwind models. We also benefit from the Bayesian tool bonnsai to estimate evolutionary masses. We provide a spectral calibration for the effective temperature of O dwarf stars in the LMC, deal with the mass discrepancy problem and investigate the wind properties of the sample.

  3. Automatic optical inspection of regular grid patterns with an inspection camera used below the Shannon-Nyquist criterion for optical resolution

    NASA Astrophysics Data System (ADS)

    Ferreira, Flávio P.; Forte, Paulo M. F.; Felgueiras, Paulo E. R.; Bret, Boris P. J.; Belsley, Michael S.; Nunes-Pereira, Eduardo J.

    2017-02-01

    An Automatic Optical Inspection (AOI) system for imaging devices used in the automotive industry is described, in which the inspecting optics have a lower spatial resolution than the device under inspection. The system is robust, has no moving parts, and has a short cycle time. Its main advantage is the ability to detect and quantify defects in regular patterns while working below the Shannon-Nyquist criterion for optical resolution, using a single low-resolution image sensor. It is easily scalable, an important advantage in industrial applications, since the same inspecting sensor can be reused for increasingly higher spatial resolutions of the devices to be inspected. The optical inspection is implemented with a notch multi-band Fourier filter, making the procedure especially suited to regular patterns, such as those that can be produced on image displays and Head-Up Displays (HUDs). The regular patterns are used only on the production line, for inspection purposes. For image displays, functional defects are detected at the level of a sub-image display grid element unit. Functional defects are those impairing the function of the display, and in AOI they are preferred to direct geometric imaging because they relate directly to the end-user experience. The shift in emphasis from geometric imaging to functional imaging is critical, since it is what allows quantitative inspection below the Shannon-Nyquist limit. For HUDs, functional defect detection addresses defects resulting from the combined effect of the image display and the image-forming optics.

  4. Evolution of digital angiography systems.

    PubMed

    Brigida, Raffaela; Misciasci, Teresa; Martarelli, Fabiola; Gangitano, Guido; Ottaviani, Pierfrancesco; Rollo, Massimo; Marano, Pasquale

    2003-01-01

    The innovations introduced by digital subtraction angiography in digital radiography are briefly illustrated, with a description of its components and functioning. The pros and cons of digital subtraction angiography are analyzed in light of present and future imaging technologies. Its advantages include automatic exposure, digital image subtraction, digital post-processing, a high number of images per second, and adjustable density and contrast. Its disadvantages include a small round field of view, geometric distortion at the image periphery, high sensitivity to patient movement, and limited spatial resolution. At present, flat-panel detectors represent the most suitable substitutes for digital subtraction angiography, introducing novel solutions for the artifacts that for years have hindered its diagnostic validity. The concept of temporal artifact, the reset light, and possible future evolutions of this technology that may offer both diagnostic and radiation-protection advantages are analyzed.

  5. An empirical strategy to detect bacterial transcript structure from directional RNA-seq transcriptome data.

    PubMed

    Wang, Yejun; MacKenzie, Keith D; White, Aaron P

    2015-05-07

    As sequencing costs are being lowered continuously, RNA-seq has gradually been adopted as the first choice for comparative transcriptome studies with bacteria. Unlike microarrays, RNA-seq can directly detect cDNA derived from mRNA transcripts at single-nucleotide resolution. Not only does this allow researchers to determine the absolute expression level of genes, but it also conveys information about transcript structure. Few automatic software tools have yet been established to investigate large-scale RNA-seq data for bacterial transcript structure analysis. In this study, 54 directional RNA-seq libraries from Salmonella serovar Typhimurium (S. Typhimurium) 14028s were examined for potential relationships between read mapping patterns and transcript structure. We developed an empirical method, combined with statistical tests, to automatically detect key transcript features, including transcriptional start sites (TSSs), transcriptional termination sites (TTSs) and operon organization. Using our method, we obtained 2,764 TSSs and 1,467 TTSs for 1,331 and 844 different genes, respectively. Identification of TSSs facilitated further discrimination of 215 putative sigma 38 regulons and 863 potential sigma 70 regulons. Combining the TSSs and TTSs with intergenic distance and co-expression information, we comprehensively annotated the operon organization in S. Typhimurium 14028s. Our results show that directional RNA-seq can be used to detect transcriptional borders at an acceptable resolution of ±10-20 nucleotides, although technical limitations of the RNA-seq procedure may prevent single-nucleotide resolution. The automatic transcript border detection methods, statistical models and operon organization pipeline that we have described could be widely applied to RNA-seq studies in other bacteria. Furthermore, the TSSs, TTSs, operons, promoters and untranslated regions that we have defined for S. Typhimurium 14028s may constitute valuable resources that can be used for comparative analyses with other Salmonella serotypes.
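
    As a rough illustration of the read-coverage heuristic described above, the sketch below flags candidate TSSs at positions where strand-specific coverage rises sharply. The window size, fold-change and minimum-coverage values are illustrative assumptions, not the statistical thresholds used in the study.

      import numpy as np

      def candidate_tss(coverage, window=10, fold=5.0, min_cov=20):
          # coverage: 1-D array of per-base read depth on one strand.
          # A position is a TSS candidate when downstream coverage is high
          # and much larger than upstream coverage (a sharp 5' boundary).
          hits = []
          for i in range(window, len(coverage) - window):
              upstream = coverage[i - window:i].mean() + 1e-9
              downstream = coverage[i:i + window].mean()
              if downstream >= min_cov and downstream / upstream >= fold:
                  hits.append(i)
          return hits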

  6. Cloud Detection from Satellite Imagery: A Comparison of Expert-Generated and Automatically-Generated Decision Trees

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar

    2004-01-01

    Automated cloud detection and tracking is an important step in assessing global climate change via remote sensing. Cloud masks, which indicate whether individual pixels depict clouds, are included in many of the data products that are based on data acquired on board Earth satellites. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical astrophysics studies and astrophysics simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In this study we explored the potential benefits of automatically-learned decision trees for detecting clouds from images acquired using the Advanced Very High Resolution Radiometer (AVHRR) instrument on board the NOAA-14 weather satellite of the National Oceanic and Atmospheric Administration. We constructed three decision trees for a sample of 8 km daily AVHRR data from 2000 using a decision-tree learning procedure provided within MATLAB(R), and compared the accuracy of the decision trees to the accuracy of the cloud mask. We used ground observations collected by the National Aeronautics and Space Administration's Clouds and the Earth's Radiant Energy System S'COOL project as the gold standard. For the sample data, the accuracy of automatically learned decision trees was greater than the accuracy of the cloud masks included in the AVHRR data product.
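
    The decision-tree learning step can be sketched as follows. The study used MATLAB's tree-learning routine; the snippet below shows the same idea in Python with scikit-learn and synthetic stand-in features, so the feature set, array shapes and hyperparameters are illustrative assumptions.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      # X: per-pixel AVHRR features (e.g. channel reflectances and brightness
      # temperatures); y: cloud/no-cloud labels from ground observations.
      # Random arrays stand in for the real training sample.
      rng = np.random.default_rng(0)
      X = rng.random((1000, 5))
      y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

      tree = DecisionTreeClassifier(max_depth=6, min_samples_leaf=20)
      tree.fit(X, y)
      cloud_mask = tree.predict(X)   # 1 = cloudy pixel, 0 = clear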

  7. An Investigation of Automatic Change Detection for Topographic Map Updating

    NASA Astrophysics Data System (ADS)

    Duncan, P.; Smit, J.

    2012-08-01

    Changes to the landscape are constantly occurring and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured, so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting and capturing these changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. Change detection is investigated through image classification as well as spatial analysis, with a focus on urban landscapes. The major data inputs to this study are high resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large scale land-use mapping and that object-oriented approaches hold more promise. Even with object-oriented image classification, however, broad-scale generalisation of techniques has provided inconsistent results. A solution may lie in a hybrid approach of pixel-based and object-oriented techniques.

  8. An Interactive Program on Digitizing Historical Seismograms

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Xu, T.

    2013-12-01

    Retrieving information from historical seismograms is of great importance since they are the unique sources that provide quantitative information about historical earthquakes. Modern seismological techniques require digital seismograms, which are essentially sequences of time-amplitude pairs. Historical seismograms, once scanned into a computer, are two-dimensional arrays in which each element contains the grayscale or RGB value of the corresponding pixel. The problem of digitizing historical seismograms, that is, converting them to digital seismograms, can be formulated as an inverse problem of generating sequences of time-amplitude pairs from a two-dimensional array; this problem has infinitely many solutions. The automatic digitization algorithm presented here uses several features of seismograms, including the continuity and smoothness of the seismic traces, as prior information, and assumes that amplitude is a single-valued function of time. An interactive program based on the algorithm is also presented. The program is developed with the Matlab GUI and offers both automatic and manual digitization modes; users can easily switch between them and try different combinations to obtain optimal results. Several examples illustrate the results of digitizing seismograms with the program, including a photographic record and a wide-angle reflection/refraction seismogram, showing the automatic digitization result and the result after manual correction.
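
    A minimal sketch of the single-valued-trace assumption is shown below: each image column contributes one amplitude pick, and a jump penalty stands in for the continuity/smoothness prior. This is an illustration of the idea only, not the program's actual algorithm.

      import numpy as np

      def digitize_trace(gray, smooth=5.0):
          # gray: 2-D grayscale scan (0 = ink, 255 = paper).
          # Pick the darkest pixel per column, penalizing jumps away from
          # the previous pick to keep the trace continuous and smooth.
          n_rows, n_cols = gray.shape
          rows = np.arange(n_rows)
          amplitude = np.empty(n_cols)
          prev = int(np.argmin(gray[:, 0]))
          amplitude[0] = prev
          for c in range(1, n_cols):
              cost = gray[:, c] + smooth * np.abs(rows - prev)
              prev = int(np.argmin(cost))
              amplitude[c] = prev
          return amplitude   # one row index (amplitude) per time sample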

  9. Automatic characterization of neointimal tissue by intravascular optical coherence tomography.

    PubMed

    Ughi, Giovanni J; Steigerwald, Kristin; Adriaenssens, Tom; Desmet, Walter; Guagliumi, Giulio; Joner, Michael; D'hooge, Jan

    2014-02-01

    Intravascular optical coherence tomography (IVOCT) is rapidly becoming the method of choice for assessing vessel healing after stent implantation due to its unique axial resolution of <20 μm. The amount of neointimal coverage is an important parameter. In addition, the characterization of neointimal tissue maturity is also of importance for an accurate analysis, especially in the case of drug-eluting and bioresorbable stent devices. Previous studies indicated that well-organized mature neointimal tissue appears as a high-intensity, smooth, and homogeneous region in IVOCT images, while lower-intensity signal areas might correspond to immature tissue mainly composed of acellular material. A new method for automatic neointimal tissue characterization, based on statistical texture analysis and a supervised classification technique, is presented. Algorithm training and validation were obtained through the use of 53 IVOCT images supported by histology data from atherosclerotic New Zealand White rabbits. A pixel-wise classification accuracy of 87% and a two-dimensional region-based analysis accuracy of 92% (with sensitivity and specificity of 91% and 93%, respectively) were found, suggesting that a reliable automatic characterization of neointimal tissue was achieved. This may potentially expand the clinical value of IVOCT in assessing the completeness of stent healing and speed up the current analysis methodologies (which are, due to their time- and energy-consuming character, not suitable for application in large clinical trials and clinical practice), potentially allowing for a wider use of IVOCT technology.
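
    The combination of statistical texture features and a supervised classifier could look roughly like the sketch below, which computes grey-level co-occurrence statistics and trains an SVM on random stand-in patches. The specific features, classifier and labels are assumptions for illustration; they are not the descriptors validated against histology in the study.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def texture_features(patch):
          # GLCM statistics for one 8-bit IVOCT patch.
          glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          return np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy")])

      # Random patches and labels stand in for the histology-validated set
      # (1 = mature neointima, 0 = immature tissue).
      rng = np.random.default_rng(1)
      patches = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(40)]
      labels = rng.integers(0, 2, 40)
      clf = SVC(kernel="rbf").fit([texture_features(p) for p in patches], labels)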

  10. Automatic cloud coverage assessment of Formosat-2 image

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument and has been operated on a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System consists of two major steps: first, an unsupervised K-means method automatically estimates the cloud statistics of a Formosat-2 image; second, cloud coverage is estimated by manual examination of the image. A more accurate Automatic Cloud Coverage Assessment (ACCA) method, providing a good prediction of the cloud statistics, clearly increases the efficiency of the second step. In this paper, building mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method that includes pre-processing and post-processing analysis. In the pre-processing analysis, cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, re-examination of non-cloudy pixels, and a cross-band filter. A box-counting fractal method is used as a post-processing tool to double-check the pre-processing results and increase the efficiency of the manual examination.
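
    Two of the listed building blocks, Otsu thresholding and box counting, are sketched below on a random stand-in band. The band choice, threshold and box sizes are illustrative assumptions rather than the settings of the operational NSPO chain.

      import numpy as np
      from skimage.filters import threshold_otsu

      def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
          # Box-counting estimate of the fractal dimension of a binary mask.
          counts = []
          for s in sizes:
              h = (mask.shape[0] // s) * s
              w = (mask.shape[1] // s) * s
              blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return slope

      band = np.random.default_rng(2).random((512, 512))   # stand-in image band
      cloud_mask = band > threshold_otsu(band)             # bright pixels as cloud
      print(box_count_dimension(cloud_mask))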

  11. A Digital Preclinical PET/MRI Insert and Initial Results.

    PubMed

    Weissler, Bjoern; Gebhardt, Pierre; Dueppenbecker, Peter M; Wehner, Jakob; Schug, David; Lerche, Christoph W; Goldschmidt, Benjamin; Salomon, Andre; Verel, Iris; Heijman, Edwin; Perkuhn, Michael; Heberling, Dirk; Botnar, Rene M; Kiessling, Fabian; Schulz, Volkmar

    2015-11-01

    Combining Positron Emission Tomography (PET) with Magnetic Resonance Imaging (MRI) results in a promising hybrid molecular imaging modality as it unifies the high sensitivity of PET for molecular and cellular processes with the functional and anatomical information from MRI. Digital Silicon Photomultipliers (dSiPMs) are the digital evolution in scintillation light detector technology and promise high PET SNR. DSiPMs from Philips Digital Photon Counting (PDPC) were used to develop a preclinical PET/RF gantry with 1-mm scintillation crystal pitch as an insert for clinical MRI scanners. With three exchangeable RF coils, the hybrid field of view has a maximum size of 160 mm × 96.6 mm (transaxial × axial). 0.1 ppm volume-root-mean-square B0 homogeneity is kept within a spherical diameter of 96 mm (automatic volume shimming). Depending on the coil, MRI SNR is decreased by 13% or 5% by the PET system. PET count rates, energy resolution of 12.6% FWHM, and spatial resolution of 0.73 mm³ (isometric volume resolution at isocenter) are not affected by applied MRI sequences. PET time resolution of 565 ps (FWHM) degraded by 6 ps during an EPI sequence. Timing-optimized settings yielded 260 ps time resolution. PET and MR images of a hot-rod phantom show no visible differences when the other modality was in operation and both resolve 0.8-mm rods. Versatility of the insert is shown by successfully combining multi-nuclei MRI (¹H/¹⁹F) with simultaneously measured PET (¹⁸F-FDG). A longitudinal study of a tumor-bearing mouse verifies the operability, stability, and in vivo capabilities of the system. Cardiac- and respiratory-gated PET/MRI motion-capturing (CINE) images of the mouse heart demonstrate the advantage of simultaneous acquisition for temporal and spatial image registration.

  12. Dynamics of Auroras Conjugate to the Dayside Reconnection Region.

    NASA Astrophysics Data System (ADS)

    Mende, S. B.; Frey, H. U.; Doolittle, J. H.

    2006-12-01

    During periods of northward IMF Bz, observations with the FUV instrument on the IMAGE satellite demonstrated the existence of an auroral footprint of the dayside lobe reconnection region. Under these conditions the dayside "reconnection spot" is a distinct feature separated from the dayside auroral oval. In the IMAGE data, with ~100 km spatial and 2-minute temporal resolution, this feature appeared as a diffuse spot of modest size, 200 to 500 km in diameter, which was present steadily while the IMF conditions lasted and the solar wind particle pressure was large enough to create a detectable signature. Based on this evidence, dayside reconnection observed at this resolution appears to be a steady-state process. There have been several attempts to identify and study the "reconnection footprint aurora" at higher resolution from the ground. South Pole Station and the network of US Automatic Geophysical Observatories (AGOs) in Antarctica have all-sky imagers that monitor the latitude region of interest (70 to 85 degrees geomagnetic) near midday during the Antarctic winter. In this paper we present sequences of auroral images taken during different Bz conditions, providing high-spatial-resolution, detailed views of the auroras associated with reconnection. During negative Bz, the auroras appear dynamic, with poleward-moving auroral forms that are clearly observed by ground-based imagers with a spatial resolution of a few km. During positive Bz, however, the extremely high latitude aurora is much more stable and shows no preferential meridional motion. It should be noted that the winter solstice conditions needed for ground-based observations produce a dipole tilt in which reconnection is not expected to be symmetric, and the auroral signatures might favor the opposite hemisphere.

  13. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
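
    The Dice scores quoted above are the standard overlap metric between binary masks; a minimal implementation is included below for reference.

      import numpy as np

      def dice_score(automatic, manual):
          # Dice similarity coefficient between two binary segmentations.
          automatic = automatic.astype(bool)
          manual = manual.astype(bool)
          intersection = np.logical_and(automatic, manual).sum()
          return 2.0 * intersection / (automatic.sum() + manual.sum())

      a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
      b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
      print(dice_score(a, b))   # 0.64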

  14. Mapping Fishing Effort through AIS Data

    PubMed Central

    Natale, Fabrizio; Gibin, Maurizio; Alessandrini, Alfredo; Vespe, Michele; Paulrud, Anton

    2015-01-01

    Several research initiatives have been undertaken to map fishing effort at high spatial resolution using the Vessel Monitoring System (VMS). An alternative to the VMS is represented by the Automatic Identification System (AIS), which in the EU became compulsory in May 2014 for all fishing vessels of length above 15 meters. The aim of this paper is to assess the uptake of the AIS in the EU fishing fleet and the feasibility of producing a map of fishing effort with high spatial and temporal resolution at European scale. After analysing a large AIS dataset for the period January-August 2014 and covering most of the EU waters, we show that AIS was adopted by around 75% of EU fishing vessels above 15 meters of length. Using the Swedish fleet as a case study, we developed a method to identify fishing activity based on the analysis of individual vessels’ speed profiles and produce a high resolution map of fishing effort based on AIS data. The method was validated using detailed logbook data and proved to be sufficiently accurate and computationally efficient to identify fishing grounds and effort in the case of trawlers, which represent the largest portion of the EU fishing fleet above 15 meters of length. Issues still to be addressed before extending the exercise to the entire EU fleet are the assessment of coverage levels of the AIS data for all EU waters and the identification of fishing activity in the case of vessels other than trawlers. PMID:26098430

  15. Mapping Fishing Effort through AIS Data.

    PubMed

    Natale, Fabrizio; Gibin, Maurizio; Alessandrini, Alfredo; Vespe, Michele; Paulrud, Anton

    2015-01-01

    Several research initiatives have been undertaken to map fishing effort at high spatial resolution using the Vessel Monitoring System (VMS). An alternative to the VMS is represented by the Automatic Identification System (AIS), which in the EU became compulsory in May 2014 for all fishing vessels of length above 15 meters. The aim of this paper is to assess the uptake of the AIS in the EU fishing fleet and the feasibility of producing a map of fishing effort with high spatial and temporal resolution at European scale. After analysing a large AIS dataset for the period January-August 2014 and covering most of the EU waters, we show that AIS was adopted by around 75% of EU fishing vessels above 15 meters of length. Using the Swedish fleet as a case study, we developed a method to identify fishing activity based on the analysis of individual vessels' speed profiles and produce a high resolution map of fishing effort based on AIS data. The method was validated using detailed logbook data and proved to be sufficiently accurate and computationally efficient to identify fishing grounds and effort in the case of trawlers, which represent the largest portion of the EU fishing fleet above 15 meters of length. Issues still to be addressed before extending the exercise to the entire EU fleet are the assessment of coverage levels of the AIS data for all EU waters and the identification of fishing activity in the case of vessels other than trawlers.
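
    The speed-profile idea can be illustrated with a few lines of pandas: positions of a trawler are flagged as fishing when speed over ground falls inside a towing-speed band. The 2-5 knot window and the toy track below are assumptions for illustration, not the calibrated values validated against logbooks in the paper.

      import pandas as pd

      def flag_fishing(track, low=2.0, high=5.0):
          # track: DataFrame with 'timestamp' and 'speed_knots' columns.
          track = track.sort_values("timestamp").copy()
          track["fishing"] = track["speed_knots"].between(low, high)
          return track

      track = pd.DataFrame({
          "timestamp": pd.date_range("2014-03-01", periods=5, freq="10min"),
          "speed_knots": [9.8, 4.1, 3.5, 3.9, 10.2],
      })
      print(flag_fishing(track))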

  16. Mapping from Space - Ontology Based Map Production Using Satellite Imageries

    NASA Astrophysics Data System (ADS)

    Asefpour Vakilian, A.; Momeni, M.

    2013-09-01

    Determination of the maximum ability for feature extraction from satellite imageries based on an ontology procedure using cartographic feature determination is the main objective of this research. Therefore, a special ontology has been developed to extract the maximum volume of information available in different high resolution satellite imageries and compare it to the map information layers required at each specific scale according to the unified specification for surveying and mapping. Ontology seeks to provide an explicit and comprehensive classification of entities in all spheres of being. This study proposes a new method for automatic maximum map feature extraction and reconstruction from high resolution satellite images. For example, in order to extract building blocks to produce maps at 1 : 5000 scale and smaller, the road networks located around the building blocks should be determined. Thus, a new building index has been developed based on concepts obtained from the ontology. Building blocks have been extracted with a completeness of about 83%. Then, road networks have been extracted and reconstructed to create a uniform network with fewer discontinuities. In this case, building blocks have been extracted with proper performance and the false positive value from the confusion matrix was reduced by about 7%. Results showed that vegetation cover and water features have been extracted completely (100%) and about 71% of limits have been extracted. The proposed method is also able to produce a map at the largest scale possible, equal to or smaller than 1 : 5000, from any multispectral high resolution satellite imagery.

  18. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    NASA Astrophysics Data System (ADS)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
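
    The least-cost-path idea between two manually picked control points can be sketched with scikit-image, where a gradient-based cost raster rewards walking along strong edges. The cost function below is a generic assumption for illustration, not one of the specially tailored cost functions described in the paper.

      import numpy as np
      from skimage.graph import route_through_array

      rng = np.random.default_rng(3)
      image = rng.random((200, 200))          # stand-in orthophoto band or DEM
      gradient = np.hypot(*np.gradient(image))
      cost = 1.0 / (gradient + 1e-6)          # cheap to follow strong edges

      start, end = (10, 15), (180, 170)       # two user-defined control points
      path, total_cost = route_through_array(cost, start, end,
                                             fully_connected=True, geometric=True)
      path = np.array(path)                   # (row, col) vertices of the trace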

  19. Advances in detecting localized road damage due to sinkholes induced by engineering works using high resolution RADARSAT-2 data

    NASA Astrophysics Data System (ADS)

    Chen, J.; Zebker, H. A.; Lakshmi, V.

    2016-12-01

    Sinkholes often occur in karst terrains such as those found in central and eastern Pennsylvania. Voids produced by dissolution of carbonate rocks can result in soil transport leading to localized, gradual or rapid, sinking of the land surface. A cluster of sinkholes developed in 2000 around a small rural community beside Bushkill Creek near a limestone quarry, severely damaging road bridges and railway tracks. At a cost of $6 million, the Pennsylvania DoT replaced the bridge, which was damaged again in 2004 by newly developed sinkholes likely associated with the quarry's pumping activity. Here we present high-resolution spaceborne interferometric radar images of sinkhole development around this community. We show that this technique may be used to monitor regions with high sinkhole damage risk and assist future infrastructure route planning, especially in rural areas where hydrogeologic information is limited. Specifically, we processed 66 RADARSAT-2 interferograms to extract deformation that occurred over Bushkill Creek between Jun. 2015 and Mar. 2016 with a temporal resolution of 24 days. We advanced recent persistent scatterer techniques to preserve meter-level spatial resolution in the interferograms while minimizing temporal decorrelation and phase unwrapping error. We observe periodic deformation due to pumping activity at the quarry and localized subsidence along Bushkill Creek that is co-located with recently reported sinkholes. We plan to use the automatic processing techniques developed for this study to study road damage in another region of Pennsylvania, along Lewiston Narrows, and also to monitor urban infrastructure improvements in Seattle, both again with RADARSAT-2 data. Our results demonstrate that recent advances in satellite geodesy can be transferred to benefit society beyond the science community.

  20. Hydro-geomorphic connectivity and landslide features extraction to identifying potential threats and hazardous areas

    NASA Astrophysics Data System (ADS)

    Tarolli, Paolo; Fuller, Ian C.; Basso, Federica; Cavalli, Marco; Sofia, Giulia

    2017-04-01

    Hydro-geomorphic connectivity has emerged as a significant new concept for understanding the transfer of surface water and sediment through landscapes. A further scientific challenge is determining how the concept can be used to enable sustainable land and water management. This research proposes an approach that integrates remote sensing techniques, connectivity theory, and geomorphometry based on high-resolution digital terrain models (HR-DTMs) to automatically extract landslide crowns and gully erosion, to determine the different rates of connectivity between the main extracted features and the river network, and thus to derive a possible categorization of hazardous areas. The study takes place in two mountainous regions in the Wellington Region (New Zealand). The methodology is a three-step approach. First, we perform automatic detection of likely landslide crowns using thresholds obtained from a statistical analysis of the variability of landform curvature. Second, we use the Connectivity Index to analyse how complex and rugged topography induces large variations in erosion and sediment delivery in the two catchments. Finally, the two methods are integrated into a single procedure that classifies the rate of connectivity between the main features and the river network and thereby identifies potential threats and hazardous areas. The methodology is fast and can produce a detailed, up-to-date inventory map that could be a key tool for mitigating erosion and sediment-delivery hazards, for managing emergencies by prioritizing the more failure-prone zones, and for preliminary interpretation of geomorphological phenomena; more generally, it could form the basis for developing inventory maps. References: Cavalli M, Trevisani S, Comiti F, Marchi L. 2013. Geomorphometric assessment of spatial sediment connectivity in small Alpine catchments. Geomorphology 188: 31-41. DOI: 10.1016/j.geomorph.2012.05.007. Sofia G, Dalla Fontana G, Tarolli P. 2014. High-resolution topography and anthropogenic feature extraction: testing geomorphometric parameters in floodplains. Hydrological Processes 28(4): 2046-2061. DOI: 10.1002/hyp.9727. Tarolli P, Sofia G, Dalla Fontana G. 2012. Geomorphic features extraction from high-resolution topography: landslide crowns and bank erosion. Natural Hazards 61(1): 65-83. DOI: 10.1007/s11069-010-9695-2.
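
    The curvature-threshold step of the first stage could be approximated as below, where a Laplacian of the DTM serves as a simple curvature proxy and cells above a mean-plus-k-standard-deviations threshold are flagged as candidate crowns. The proxy and the value of k are assumptions for illustration, not the statistics used by the authors.

      import numpy as np
      from scipy import ndimage

      def candidate_crowns(dtm, cell_size=1.0, k=2.0):
          # Laplacian of the DTM as a curvature proxy; flag strongly
          # convex cells as likely landslide-crown locations.
          curvature = ndimage.laplace(dtm) / (cell_size ** 2)
          threshold = curvature.mean() + k * curvature.std()
          return curvature > threshold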

  1. Detection of Thermal Erosion Gullies from High-Resolution Images Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Huang, L.; Liu, L.; Jiang, L.; Zhang, T.; Sun, Y.

    2017-12-01

    Thermal erosion gullies, one type of thermokarst landform, develop due to thawing of ice-rich permafrost. Mapping the location and extent of thermal erosion gullies can help understand the spatial distribution of thermokarst landforms and their temporal evolution. Remote sensing images provide an effective way to map thermokarst landforms, especially thermokarst lakes. However, thermal erosion gullies are challenging to map from remote sensing images due to their small sizes and significant variations in geometric/radiometric properties. It is feasible to identify these features manually, as a few previous studies have done, but manual methods are labor-intensive and therefore cannot be used for a large study area. In this work, we conduct automatic mapping of thermal erosion gullies from high-resolution images using Deep Learning. Our study area is located in Eboling Mountain (Qinghai, China). Within a 6 km2 peatland area underlain by ice-rich permafrost, at least 20 thermal erosion gullies are well developed. The image used is a 15-cm-resolution Digital Orthophoto Map (DOM) generated in July 2016. First, we extracted 14 gully patches and ten non-gully patches as training data and performed image augmentation. Next, we fine-tuned the pre-trained model of DeepLab, a deep-learning algorithm for semantic image segmentation based on Deep Convolutional Neural Networks. Then, we performed inference on the whole DOM and obtained intermediate results in the form of polygons for all identified gullies. Finally, we removed misidentified polygons based on a few preset criteria on the size and shape of each polygon. Our final results include 42 polygons. Validated against field measurements using GPS, most of the gullies were detected correctly. There were 20 false detections due to the small number and low quality of training images. We also found three new gullies that were missed in the field observations. This study shows that (1) despite a challenging mapping task, DeepLab can detect small, irregular-shaped thermal erosion gullies with high accuracy; (2) automatic detection is critical for mapping thermal erosion gullies, since manual mapping or field work may miss some targets even in a relatively small region; and (3) the quantity and quality of training data are crucial for detection accuracy.
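
    A fine-tuning step of this kind can be sketched with torchvision's DeepLabV3 implementation (the study used the original DeepLab); the model variant, patch size, class count and the random tensors below are illustrative assumptions, not the study's configuration.

      import torch
      import torchvision

      model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
      model.classifier[4] = torch.nn.Conv2d(256, 2, kernel_size=1)  # gully / background

      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      loss_fn = torch.nn.CrossEntropyLoss()

      images = torch.rand(2, 3, 256, 256)            # stand-in orthophoto patches
      masks = torch.randint(0, 2, (2, 256, 256))     # stand-in gully masks

      model.train()
      logits = model(images)["out"]                  # (batch, 2, H, W)
      loss = loss_fn(logits, masks)
      loss.backward()
      optimizer.step()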

  2. Silicon immersion gratings and their spectroscopic applications

    NASA Astrophysics Data System (ADS)

    Ge, Jian; Zhao, Bo; Powell, Scott; Fletcher, Adam; Wan, Xiaoke; Chang, Liang; Jakeman, Hali; Koukis, Dimitrios; Tanner, David B.; Ebbets, Dennis; Weinberg, Jonathan; Lipscy, Sarah; Nyquist, Rich; Bally, John

    2012-09-01

    Silicon immersion gratings (SIGs) offer several advantages over commercial echelle gratings for high resolution infrared (IR) spectroscopy: a 3.4 times gain in dispersion or a ~10 times reduction in instrument volume, a multiplex gain for large continuous wavelength coverage, and low cost. We present results from lab characterization of a large format SIG of astronomical observation quality. This SIG, with a 54.74 degree blaze angle (R1.4), 16.1 l/mm groove density, and 50 × 86 mm² grating area, was developed for high resolution IR spectroscopy (R~70,000) in the near IR (1.1-2.5 μm). Its entrance surface was coated with a single layer of silicon nitride antireflection (AR) coating and its grating surface was coated with a thin layer of gold to increase its throughput at 1.1-2.5 μm. The lab measurements have shown that the SIG delivered a spectral resolution of R=114,000 at 1.55 μm with a lab testing spectrograph with a 20 mm diameter pupil. The measured peak grating efficiency is 72% at 1.55 μm, which is consistent with measurements in the optical wavelengths from the grating surface at the air side. This SIG is being implemented in a new generation cryogenic IR spectrograph, called the Florida IR Silicon immersion grating spectrometer (FIRST), to offer broad-band high resolution IR spectroscopy with R=72,000 at 1.4-1.8 μm under a typical seeing condition in a single exposure with a 2k×2k H2RG IR array at the robotically controlled Tennessee State University 2-meter Automatic Spectroscopic Telescope (AST) at Fairborn Observatory in Arizona. FIRST is designed to provide high precision Doppler measurements (~4 m/s) for the identification and characterization of extrasolar planets, especially rocky planets in habitable zones, orbiting low mass M dwarf stars. It will also be used for other high resolution IR spectroscopic observations of targets such as young stars, brown dwarfs, magnetic fields, star formation and the interstellar medium. An optimally designed SIG of similar size can be used in the Silicon Immersion Grating Spectrometer (SIGS) to fill the need for high resolution spectroscopy at mid to far IR wavelengths (~25-300 μm) for the NASA SOFIA airborne mission in the future.
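
    As a back-of-the-envelope check (our own, not from the paper), the quoted resolution is consistent with the standard echelle relation for an immersion grating, taking n ≈ 3.4 for silicon, a D = 20 mm pupil and the R1.4 blaze angle:

      R \;\approx\; \frac{2\, n\, D \tan\theta_B}{\lambda}
        \;=\; \frac{2 \times 3.4 \times 0.020\ \mathrm{m} \times \tan 54.74^{\circ}}{1.55 \times 10^{-6}\ \mathrm{m}}
        \;\approx\; 1.2 \times 10^{5},

    which is close to the measured R = 114,000 at 1.55 μm; the factor n is the ~3.4× dispersion gain over a conventional echelle mentioned above.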

  3. Combining Space-Based and In-Situ Measurements to Track Flooding in Thailand

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Doubleday, Joshua; Mclaren, David; Tran, Daniel; Tanpipat, Veerachai; Chitradon, Royal; Boonya-aaroonnet, Surajate; Thanapakpawin, Porranee; Khunboa, Chatchai; Leelapatra, Watis

    2011-01-01

    We describe efforts to integrate in-situ sensing, space-borne sensing, hydrological modeling, active control of sensing, and automatic data product generation to enhance monitoring and management of flooding. In our approach, broad coverage sensors and missions such as MODIS, TRMM, and weather satellite information and in-situ weather and river gauging information are all inputs to track flooding via river basin and sub-basin hydrological models. While these inputs can provide significant information as to the major flooding, targetable space measurements can provide better spatial resolution measurements of flooding extent. In order to leverage such assets we automatically task observations in response to automated analysis indications of major flooding. These new measurements are automatically processed and assimilated with the other flooding data. We describe our ongoing efforts to deploy this system to track major flooding events in Thailand.

  4. On the role of conflict and control in social cognition: event-related brain potential investigations.

    PubMed

    Bartholow, Bruce D

    2010-03-01

    Numerous social-cognitive models posit that social behavior largely is driven by links between constructs in long-term memory that automatically become activated when relevant stimuli are encountered. Various response biases have been understood in terms of the influence of such "implicit" processes on behavior. This article reviews event-related potential (ERP) studies investigating the role played by cognitive control and conflict resolution processes in social-cognitive phenomena typically deemed automatic. Neurocognitive responses associated with response activation and conflict often are sensitive to the same stimulus manipulations that produce differential behavioral responses on social-cognitive tasks and that often are attributed to the role of automatic associations. Findings are discussed in the context of an overarching social cognitive neuroscience model in which physiological data are used to constrain social-cognitive theories.

  5. Evaluation of Decision Trees for Cloud Detection from AVHRR Data

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Nemani, Ramakrishna

    2005-01-01

    Automated cloud detection and tracking is an important step in assessing changes in radiation budgets associated with global climate change via remote sensing. Data products based on satellite imagery are available to the scientific community for studying trends in the Earth's atmosphere. The data products include pixel-based cloud masks that assign cloud-cover classifications to pixels. Many cloud-mask algorithms have the form of decision trees. The decision trees employ sequential tests that scientists designed based on empirical astrophysics studies and simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In a previous study we compared automatically learned decision trees to cloud masks included in Advanced Very High Resolution Radiometer (AVHRR) data products from the year 2000. In this paper we report the replication of the study for five years of data, and for a gold standard based on surface observations performed by scientists at weather stations in the British Isles. For our sample data, the accuracy of automatically learned decision trees was greater than the accuracy of the cloud masks (p < 0.001).

  6. Multimodal system for the planning and guidance of bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  7. Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography

    PubMed Central

    Loss, Leandro A.; Bebis, George; Chang, Hang; Auer, Manfred; Sarkar, Purbasha; Parvin, Bahram

    2016-01-01

    Electron tomography is a promising technology for imaging ultrastructures at nanoscale resolutions. However, image and quantitative analyses are often hindered by high levels of noise, staining heterogeneity, and material damage either as a result of the electron beam or sample preparation. We have developed and built a framework that allows for automatic segmentation and quantification of filamentous objects in 3D electron tomography. Our approach consists of three steps: (i) local enhancement of filaments by Hessian filtering; (ii) detection and completion (e.g., gap filling) of filamentous structures through tensor voting; and (iii) delineation of the filamentous networks. Our approach allows for quantification of filamentous networks in terms of their compositional and morphological features. We first validate our approach using a set of specifically designed synthetic data. We then apply our segmentation framework to tomograms of plant cell walls that have undergone different chemical treatments for polysaccharide extraction. The subsequent compositional and morphological analyses of the plant cell walls reveal their organizational characteristics and the effects of the different chemical protocols on specific polysaccharides. PMID:28090597

  8. Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography.

    PubMed

    Loss, Leandro A; Bebis, George; Chang, Hang; Auer, Manfred; Sarkar, Purbasha; Parvin, Bahram

    2012-10-01

    Electron tomography is a promising technology for imaging ultrastructures at nanoscale resolutions. However, image and quantitative analyses are often hindered by high levels of noise, staining heterogeneity, and material damage either as a result of the electron beam or sample preparation. We have developed and built a framework that allows for automatic segmentation and quantification of filamentous objects in 3D electron tomography. Our approach consists of three steps: (i) local enhancement of filaments by Hessian filtering; (ii) detection and completion (e.g., gap filling) of filamentous structures through tensor voting; and (iii) delineation of the filamentous networks. Our approach allows for quantification of filamentous networks in terms of their compositional and morphological features. We first validate our approach using a set of specifically designed synthetic data. We then apply our segmentation framework to tomograms of plant cell walls that have undergone different chemical treatments for polysaccharide extraction. The subsequent compositional and morphological analyses of the plant cell walls reveal their organizational characteristics and the effects of the different chemical protocols on specific polysaccharides.
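
    Step (i) of the pipeline, local enhancement of filaments by Hessian filtering, can be approximated with scikit-image's Frangi ridge filter as below; the scales, threshold and random slice are illustrative assumptions, and the tensor-voting and network-delineation steps are not reproduced.

      import numpy as np
      from skimage.filters import frangi
      from skimage.morphology import remove_small_objects

      def enhance_filaments(tomogram_slice, sigmas=(1, 2, 3), threshold=0.05):
          # Hessian-based (Frangi) ridge response for bright filaments,
          # followed by a simple size filter.
          response = frangi(tomogram_slice, sigmas=sigmas, black_ridges=False)
          return remove_small_objects(response > threshold, min_size=20)

      slice_ = np.random.default_rng(4).random((128, 128))   # stand-in 2-D slice
      filament_mask = enhance_filaments(slice_)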

  9. DNA copy number, including telomeres and mitochondria, assayed using next-generation sequencing.

    PubMed

    Castle, John C; Biery, Matthew; Bouzek, Heather; Xie, Tao; Chen, Ronghua; Misura, Kira; Jackson, Stuart; Armour, Christopher D; Johnson, Jason M; Rohl, Carol A; Raymond, Christopher K

    2010-04-16

    DNA copy number variations occur within populations and aberrations can cause disease. We sought to develop an improved lab-automatable, cost-efficient, accurate platform to profile DNA copy number. We developed a sequencing-based assay of nuclear, mitochondrial, and telomeric DNA copy number that draws on the unbiased nature of next-generation sequencing and incorporates techniques developed for RNA expression profiling. To demonstrate this platform, we assayed UMC-11 cells using 5 million 33 nt reads and found tremendous copy number variation, including regions of single and homogeneous deletions and amplifications to 29 copies; 5 times more mitochondria and 4 times less telomeric sequence than a pool of non-diseased, blood-derived DNA; and that UMC-11 was derived from a male individual. The described assay outputs absolute copy number, outputs an error estimate (p-value), and is more accurate than array-based platforms at high copy number. The platform enables profiling of mitochondrial levels and telomeric length. The assay is lab-automatable and has a genomic resolution and cost that are tunable based on the number of sequence reads.

  10. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    PubMed Central

    Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard

    2016-01-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment. PMID:27922592

  11. Graph-based active learning of agglomeration (GALA): a Python library to segment 2D and 3D neuroimages

    PubMed Central

    Nunez-Iglesias, Juan; Kennedy, Ryan; Plaza, Stephen M.; Chakraborty, Anirban; Katz, William T.

    2014-01-01

    The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them. PMID:24772079

  12. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    NASA Astrophysics Data System (ADS)

    Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard

    2016-12-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

  13. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification.

    PubMed

    Bradbury, Kyle; Saboo, Raghav; L Johnson, Timothy; Malof, Jordan M; Devarajan, Arjun; Zhang, Wuming; M Collins, Leslie; G Newell, Richard

    2016-12-06

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

  14. bcgTree: automatized phylogenetic tree building from bacterial core genomes.

    PubMed

    Ankenbrand, Markus J; Keller, Alexander

    2016-10-01

    The need for multi-gene analyses in scientific fields such as phylogenetics and DNA barcoding has increased in recent years. In particular, these approaches are increasingly important for differentiating bacterial species, where reliance on the standard 16S rDNA marker can result in poor resolution. Additionally, the assembly of bacterial genomes has become a standard task due to advances in next-generation sequencing technologies. We created a bioinformatic pipeline, bcgTree, which uses assembled bacterial genomes, either from databases or from the user's own sequencing results, to reconstruct their phylogenetic history. The pipeline automatically extracts 107 essential single-copy core genes, found in a majority of bacteria, using hidden Markov models and performs a partitioned maximum-likelihood analysis. Here, we describe the workflow of bcgTree and, as a proof-of-concept, its usefulness in resolving the phylogeny of 293 publicly available bacterial strains of the genus Lactobacillus. We also evaluate its performance in both low- and high-level taxonomy test sets. The tool is freely available at github ( https://github.com/iimog/bcgTree ) and our institutional homepage ( http://www.dna-analytics.biozentrum.uni-wuerzburg.de ).

  15. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction

    NASA Astrophysics Data System (ADS)

    Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G.

    2015-02-01

    Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data over Trento, for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and the geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pléiades-1A. For reference and validation, a DSM from airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.

  16. DNA copy number, including telomeres and mitochondria, assayed using next-generation sequencing

    PubMed Central

    2010-01-01

    Background DNA copy number variations occur within populations and aberrations can cause disease. We sought to develop an improved lab-automatable, cost-efficient, accurate platform to profile DNA copy number. Results We developed a sequencing-based assay of nuclear, mitochondrial, and telomeric DNA copy number that draws on the unbiased nature of next-generation sequencing and incorporates techniques developed for RNA expression profiling. To demonstrate this platform, we assayed UMC-11 cells using 5 million 33 nt reads and found tremendous copy number variation, including regions of single and homogeneous deletions and amplifications to 29 copies; 5 times more mitochondria and 4 times less telomeric sequence than a pool of non-diseased, blood-derived DNA; and that UMC-11 was derived from a male individual. Conclusion The described assay outputs absolute copy number, outputs an error estimate (p-value), and is more accurate than array-based platforms at high copy number. The platform enables profiling of mitochondrial levels and telomeric length. The assay is lab-automatable and has a genomic resolution and cost that are tunable based on the number of sequence reads. PMID:20398377
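
    The read-depth idea behind this kind of sequencing-based copy-number assay can be sketched minimally as follows, assuming reads have already been counted in fixed genomic bins for the sample and for a diploid reference pool; the bin counts are hypothetical and this is not the paper's exact assay.

      import numpy as np

      def copy_number_from_depth(sample_counts, reference_counts, pseudocount=1.0):
          """Scale per-bin read-depth ratios (sample vs. diploid reference) to copy number."""
          sample = np.asarray(sample_counts, dtype=float) + pseudocount
          reference = np.asarray(reference_counts, dtype=float) + pseudocount
          # Normalize for total sequencing depth, then scale the ratio to a diploid genome.
          ratio = (sample / sample.sum()) / (reference / reference.sum())
          return 2.0 * ratio

      # Hypothetical bin counts: bin 3 looks amplified, bin 5 looks deleted.
      sample_bins = [100, 95, 480, 102, 5]
      reference_bins = [100, 100, 100, 100, 100]
      print(np.round(copy_number_from_depth(sample_bins, reference_bins), 1))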

  17. Nucleus segmentation in histology images with hierarchical multilevel thresholding

    NASA Astrophysics Data System (ADS)

    Ahmady Phoulady, Hady; Goldgof, Dmitry B.; Hall, Lawrence O.; Mouton, Peter R.

    2016-03-01

    Automatic segmentation of histological images is an important step for increasing throughput while maintaining high accuracy, avoiding variation from subjective bias, and reducing the costs for diagnosing human illnesses such as cancer and Alzheimer's disease. In this paper, we present a novel method for unsupervised segmentation of cell nuclei in stained histology tissue. Following an initial preprocessing step involving color deconvolution and image reconstruction, the segmentation step consists of multilevel thresholding and a series of morphological operations. The only parameter required for the method is the minimum region size, which is set according to the resolution of the image. Hence, the proposed method requires no training sets or parameter learning. Because the algorithm requires no assumptions or a priori information with regard to cell morphology, the automatic approach is generalizable across a wide range of tissues. Evaluation across a dataset consisting of diverse tissues, including breast, liver, gastric mucosa and bone marrow, shows superior performance over four other recent methods on the same dataset in terms of F-measure with precision and recall of 0.929 and 0.886, respectively.
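
    A hypothetical sketch in the spirit of this pipeline, using colour deconvolution, multi-Otsu thresholding and a few morphological clean-up steps from scikit-image, is shown below; the file name and the minimum region size are placeholders (the paper ties the latter to image resolution).

      import numpy as np
      from skimage import color, filters, measure, morphology
      from skimage.io import imread

      def segment_nuclei(rgb_image, min_size=80):
          hematoxylin = color.rgb2hed(rgb_image)[..., 0]        # nuclei-rich stain channel
          thresholds = filters.threshold_multiotsu(hematoxylin, classes=3)
          nuclei = hematoxylin > thresholds[-1]                 # darkest-stain class
          nuclei = morphology.binary_closing(nuclei, morphology.disk(2))
          nuclei = morphology.remove_small_objects(nuclei, min_size=min_size)
          return measure.label(nuclei)

      labels = segment_nuclei(imread("histology_tile.png")[..., :3])
      print(labels.max(), "candidate nuclei")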

  18. Automatic scoring of dicentric chromosomes as a tool in large scale radiation accidents.

    PubMed

    Romm, H; Ainsbury, E; Barnard, S; Barrios, L; Barquinero, J F; Beinke, C; Deperas, M; Gregoire, E; Koivistoinen, A; Lindholm, C; Moquet, J; Oestreicher, U; Puig, R; Rothkamm, K; Sommer, S; Thierens, H; Vandersickel, V; Vral, A; Wojcik, A

    2013-08-30

    Mass casualty scenarios of radiation exposure require high throughput biological dosimetry techniques for population triage in order to rapidly identify individuals who require clinical treatment. The manual dicentric assay is a highly suitable technique, but it is also very time consuming and requires well trained scorers. In the framework of the MULTIBIODOSE EU FP7 project, semi-automated dicentric scoring has been established in six European biodosimetry laboratories. Whole blood was irradiated with a Co-60 gamma source resulting in 8 different doses between 0 and 4.5Gy and then shipped to the six participating laboratories. To investigate two different scoring strategies, cell cultures were set up with short term (2-3h) or long term (24h) colcemid treatment. Three classifiers for automatic dicentric detection were applied, two of which were developed specifically for these two different culture techniques. The automation procedure included metaphase finding, capture of cells at high resolution and detection of dicentric candidates. The automatically detected dicentric candidates were then evaluated by a trained human scorer, which led to the term 'semi-automated' being applied to the analysis. The six participating laboratories established at least one semi-automated calibration curve each, using the appropriate classifier for their colcemid treatment time. There was no significant difference between the calibration curves established, regardless of the classifier used. The ratio of false positive to true positive dicentric candidates was dose dependent. The total staff effort required for analysing 150 metaphases using the semi-automated approach was 2 min as opposed to 60 min for manual scoring of 50 metaphases. Semi-automated dicentric scoring is a useful tool in a large scale radiation accident as it enables high throughput screening of samples for fast triage of potentially exposed individuals. Furthermore, the results from the participating laboratories were comparable which supports networking between laboratories for this assay. Copyright © 2013 Elsevier B.V. All rights reserved.
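
    The calibration curves mentioned above are conventionally fitted with a linear-quadratic dose-response model, Y = C + alpha*D + beta*D^2; the sketch below illustrates that fit with scipy using made-up dose/yield pairs, not the MULTIBIODOSE calibration data.

      import numpy as np
      from scipy.optimize import curve_fit

      def linear_quadratic(dose, c, alpha, beta):
          return c + alpha * dose + beta * dose ** 2

      doses = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.5])         # Gy (placeholder values)
      yields = np.array([0.001, 0.02, 0.06, 0.19, 0.40, 0.85])  # dicentrics per cell (placeholder)
      params, _ = curve_fit(linear_quadratic, doses, yields)
      c, alpha, beta = params
      print(f"C={c:.4f}, alpha={alpha:.4f}/Gy, beta={beta:.4f}/Gy^2")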

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Fan; Wang, Yuanqing, E-mail: yqwang@nju.edu.cn; Li, Fenfang

    The avalanche-photodiode-array (APD-array) laser detection and ranging (LADAR) system has been continually developed owing to its advantages of nonscanning operation, large field of view, high sensitivity, and high precision. However, achieving more efficient detection and better integration of the LADAR system for real-time three-dimensional (3D) imaging remains a problem. In this study, a novel LADAR system using four linear-mode APDs (LmAPDs) is developed for highly efficient detection by adopting a modulation and multiplexing technique. Furthermore, an automatic control system for the array LADAR system is proposed and designed by applying the virtual instrumentation technique. The control system aims to achieve four functions: synchronization of laser emission and the rotating platform, multi-channel synchronous data acquisition, real-time Ethernet upper-level monitoring, and real-time signal processing and 3D visualization. The structure and principle of the complete system are described in the paper. The experimental results demonstrate that the LADAR system is capable of real-time 3D imaging on an omnidirectional rotating platform under the control of the virtual instrumentation system. The automatic imaging LADAR system used only 4 LmAPDs to achieve 256-pixel-per-frame detection by employing a 64-bit demodulator. Moreover, the lateral resolution is ∼15 cm and the range accuracy is ∼4 cm root-mean-square error at a distance of ∼40 m.

  20. Forest Stand Segmentation Using Airborne LIDAR Data and Very High Resolution Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet, Valérie; Hervieu, Alexandre

    2016-06-01

    Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by visual analysis by human operators on very high resolution (VHR) optical images. This work is highly time-consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning data (or lidar) and very high resolution multispectral imagery for automatic forest stand delineation and forest land-cover database update is proposed. The multispectral images give access to the tree species whereas 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at pixel and object levels. The objects are individual trees extracted from lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map generated from the tree species classification is combined with the pixel-based feature map in an energy-minimization framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of detail. Comparison with an existing forest land cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching ranges between 94% and 99%).
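
    The regularization step can be illustrated with a deliberately simplified stand-in: instead of the paper's graph cuts (QPBO with α-expansion), the sketch below smooths a per-pixel species-probability map with a basic iterated-conditional-modes (ICM) sweep over a Potts-style energy. Array shapes, the smoothness weight, and the toy data are assumptions.

      import numpy as np

      # ICM smoothing of class probabilities (simplified stand-in for graph cuts).
      # probs has shape (rows, cols, n_classes); smoothness weights label agreement
      # with 4-neighbours (edges wrap around for brevity in this sketch).
      def icm_smooth(probs, smoothness=1.0, n_iter=5):
          labels = probs.argmax(axis=-1)
          n_classes = probs.shape[-1]
          data_cost = -np.log(np.clip(probs, 1e-6, None))
          for _ in range(n_iter):
              for k in range(n_classes):
                  disagree = np.zeros(labels.shape)
                  for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
                      disagree += np.roll(labels, shift, axis=axis) != k
                  cost_k = data_cost[..., k] + smoothness * disagree
                  if k == 0:
                      best_cost, best_label = cost_k, np.zeros(labels.shape, int)
                  else:
                      better = cost_k < best_cost
                      best_cost = np.where(better, cost_k, best_cost)
                      best_label = np.where(better, k, best_label)
              labels = best_label
          return labels

      # Toy example: a noisy two-class probability map becomes two clean "stands".
      rng = np.random.default_rng(0)
      p = np.clip(rng.normal(0.7, 0.2, (60, 60)), 0.05, 0.95)
      p[:, 30:] = 1 - p[:, 30:]
      stand_labels = icm_smooth(np.dstack([p, 1 - p]), smoothness=2.0)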

  1. Impact of four-dimensional data assimilation (FDDA) on urban climate analysis

    NASA Astrophysics Data System (ADS)

    Pan, Linlin; Liu, Yubao; Liu, Yuewei; Li, Lei; Jiang, Yin; Cheng, Will; Roux, Gregory

    2015-12-01

    This study investigates the impact of four-dimensional data assimilation (FDDA) on urban climate analysis. It employs the NCAR (National Center for Atmospheric Research) WRF (Weather Research and Forecasting) model with climate FDDA (CFDDA) technology to develop an urban-scale microclimatology database for the Shenzhen area, a rapidly developing metropolitan area on the southern coast of China where uniquely high-density observations, including an ultrahigh-resolution surface AWS (automatic weather station) network, radio soundings, wind profilers, radiometers, and other weather observation platforms, have been installed. CFDDA is an innovative dynamical-downscaling regional climate analysis system that assimilates diverse regional observations and has been employed to produce a 5 year multiscale high-resolution microclimate analysis by assimilating high-density observations in the Shenzhen area. The CFDDA system was configured with four nested-grid domains at grid sizes of 27, 9, 3, and 1 km, respectively. This research evaluates the impact of assimilating high-resolution observation data on reproducing the fine-scale features of urban-scale circulations. Two experiments were conducted with a 5 year run using CFSR (climate forecast system reanalysis) as boundary and initial conditions: one with CFDDA and the other without. Comparisons of these two experiments with observations indicate that CFDDA greatly reduces the model analysis error and is able to realistically analyze microscale features such as urban-rural-coastal circulation, land/sea breezes, and local hilly-terrain thermal circulations. It is demonstrated that urbanization can produce 2.5 K differences in 2-m temperature, delays or speeds up land/sea breeze development, and interacts with local mountain-valley circulations.

  2. Automatic detection of blurred images in UAV image sets

    NASA Astrophysics Data System (ADS)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution, due to the low flight altitudes combined with a high resolution camera. UAV image flights are also cost effective and have become attractive for many applications including change detection in small scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image-sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), on its own does not provide an absolute number to judge whether an image is blurred or not. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method was tested using a range of different UAV datasets. Two datasets are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are consistent with visual assessment, making the algorithm applicable for UAV datasets. Additionally, a close range dataset was processed to determine whether the method is also useful for close range applications. The results show that the method is also reliable for close range images, which significantly extends the field of application for the algorithm.
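
    Since only the name SIEDS and its comparative use are given here, the sketch below is a loose reconstruction of the idea rather than the authors' formulation: re-blur the image, compare edge responses of the saturation channel before and after, take the standard deviation of the difference, and judge each image only relative to the rest of its dataset. File names and the blur kernel are placeholders.

      import cv2
      import numpy as np

      def sieds_like(path, blur_kernel=(9, 9)):
          bgr = cv2.imread(path)
          sat = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.float64)
          sat_blur = cv2.GaussianBlur(sat, blur_kernel, 0)
          edge_orig = cv2.Laplacian(sat, cv2.CV_64F)
          edge_blur = cv2.Laplacian(sat_blur, cv2.CV_64F)
          return float(np.std(edge_orig - edge_blur))

      # The value is only meaningful relative to the dataset: images scoring well
      # below the dataset median are flagged as blurred.
      scores = {p: sieds_like(p) for p in ["img_001.jpg", "img_002.jpg", "img_003.jpg"]}
      median = np.median(list(scores.values()))
      flagged = [p for p, s in scores.items() if s < 0.5 * median]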

  3. CFD Analysis and Design Optimization Using Parallel Computers

    NASA Technical Reports Server (NTRS)

    Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James

    1997-01-01

    A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.

  4. Automated AFM for small-scale and large-scale surface profiling in CMP applications

    NASA Astrophysics Data System (ADS)

    Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il

    2018-03-01

    As feature sizes shrink in the foundries, the need for inline high resolution surface profiling with versatile capabilities is increasing. One of the important areas of this need is the chemical mechanical planarization (CMP) process. We introduce a new generation of atomic force profiler (AFP) using a decoupled-scanner design. The system is capable of providing small-scale profiling using the XY scanner and large-scale profiling using a sliding stage. The decoupled-scanner design enables enhanced vision, which helps minimize the positioning error for locations of interest in the case of highly polished dies. Non-contact mode imaging is another feature of interest in this system and is used for surface roughness measurement, automatic defect review, and deep trench measurement. Examples of measurements performed using the atomic force profiler are demonstrated.

  5. Automatic Perceptual Color Map Generation for Realistic Volume Visualization

    PubMed Central

    Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor

    2008-01-01

    Advances in computed tomography imaging technology and inexpensive high performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume rendered patient data. PMID:18430609
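
    As a hedged illustration of the general idea (not the authors' algorithm), one can build a color map whose CIELAB lightness rises monotonically, preserving the discriminating power of a grayscale ramp, while a fixed flesh-like hue gives a more natural appearance; the a*/b* values below are arbitrary choices.

      import numpy as np
      from skimage import color

      def perceptual_colormap(n=256, a_star=25.0, b_star=20.0):
          L = np.linspace(0.0, 100.0, n)                  # monotonic lightness ramp
          chroma = np.sin(np.pi * L / 100.0)              # fade the hue at the extremes
          lab = np.stack([L, a_star * chroma, b_star * chroma], axis=-1)
          rgb = color.lab2rgb(lab[np.newaxis, :, :])[0]   # (1, n, 3) -> (n, 3)
          return np.clip(rgb, 0.0, 1.0)

      cmap = perceptual_colormap()
      print(cmap.shape, cmap.min(), cmap.max())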

  6. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters

    PubMed Central

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-01-01

    This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
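
    The registration step named in this abstract (a homography model with SIFT features and RANSAC) can be sketched with OpenCV roughly as follows, assuming an OpenCV build that ships SIFT; the band file names are placeholders and this is not the authors' full composing pipeline.

      import cv2
      import numpy as np

      ref = cv2.imread("band_green.tif", cv2.IMREAD_GRAYSCALE)
      mov = cv2.imread("band_red.tif", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(ref, None)
      kp2, des2 = sift.detectAndCompute(mov, None)

      # Lowe's ratio test keeps only distinctive matches.
      matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]

      src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
      H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

      # Resample the moving band into the reference band's geometry.
      registered = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))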

  7. Automatic Event Detection in Search for Inter-Moss Loops in IRIS Si IV Slit-Jaw Images

    NASA Technical Reports Server (NTRS)

    Fayock, Brian; Winebarger, Amy R.; De Pontieu, Bart

    2015-01-01

    The high-resolution capabilities of the Interface Region Imaging Spectrometer (IRIS) mission have allowed the exploration of the finer details of the solar magnetic structure from the chromosphere to the lower corona that have previously been unresolved. Of particular interest to us are the relatively short-lived, low-lying magnetic loops that have footpoints in neighboring moss regions. These inter-moss loops have also appeared in several AIA pass bands, which are generally associated with temperatures that are at least an order of magnitude higher than that of the Si IV emission seen in the 1400 angstrom pass band of IRIS. While the emission lines seen in these pass bands can be associated with a range of temperatures, the simultaneous appearance of these loops in IRIS 1400 and AIA 171, 193, and 211 suggests that they are not in ionization equilibrium. To study these structures in detail, we have developed a series of algorithms to automatically detect signal brightenings, or events, on a pixel-by-pixel basis and group them together as structures for each of the above data sets. These algorithms have successfully picked out all activity fitting certain adjustable criteria. The resulting groups of events are then statistically analyzed to determine which characteristics can be used to distinguish the inter-moss loops from all other structures. While a few characteristic histograms reveal that manually selected inter-moss loops lie outside the norm, a combination of several characteristics will need to be used to determine the statistical likelihood that a group of events should be categorized automatically as a loop of interest. The goal of this project is to be able to automatically pick out inter-moss loops from an entire data set and calculate the characteristics that have previously been determined manually, such as length, intensity, and lifetime. We will discuss the algorithms, preliminary results, and current progress of automatic characterization.
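
    A loose sketch of this kind of pixel-wise event detection and space-time grouping (not the authors' algorithms) could look like the following, assuming a data cube of slit-jaw frames; thresholds and the synthetic example are assumptions.

      import numpy as np
      from scipy import ndimage

      def detect_events(cube, k=3.0, min_voxels=20):
          """cube: (n_frames, ny, nx) intensities; returns label cube and kept event ids."""
          background = np.median(cube, axis=0)
          noise = np.std(cube, axis=0) + 1e-6
          flagged = cube > background + k * noise
          labels, n_events = ndimage.label(flagged)          # group flagged voxels in (t, y, x)
          sizes = ndimage.sum(flagged, labels, index=np.arange(1, n_events + 1))
          keep = np.flatnonzero(sizes >= min_voxels) + 1      # drop tiny noise groups
          return labels, keep

      # Synthetic cube with one injected brightening; report lifetime and peak per event.
      cube = np.random.default_rng(1).normal(100.0, 5.0, (50, 64, 64))
      cube[20:26, 30:34, 30:40] += 60.0
      labels, events = detect_events(cube)
      for event in events:
          t, y, x = np.nonzero(labels == event)
          print(event, "lifetime:", t.ptp() + 1, "frames, peak:", round(cube[labels == event].max(), 1))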

  8. Automatic Picking of Foraminifera: Design of the Foraminifera Image Recognition and Sorting Tool (FIRST) Prototype and Results of the Image Classification Scheme

    NASA Astrophysics Data System (ADS)

    de Garidel-Thoron, T.; Marchant, R.; Soto, E.; Gally, Y.; Beaufort, L.; Bolton, C. T.; Bouslama, M.; Licari, L.; Mazur, J. C.; Brutti, J. M.; Norsa, F.

    2017-12-01

    Foraminifera tests are the main proxy carriers for paleoceanographic reconstructions. Both geochemical and taxonomical studies require large numbers of tests to achieve statistical relevance. To date, the extraction of foraminifera from the sediment coarse fraction is still done by hand and is thus time-consuming. Moreover, the recognition of ecologically relevant morphotypes requires taxonomic skills that are not easily taught. The automatic recognition and extraction of foraminifera would largely help paleoceanographers to overcome these issues. Recent advances in automatic image classification using machine learning open the way to automatic extraction of foraminifera. Here we detail progress on the design of an automatic picking machine as part of the FIRST project. The machine handles 30 pre-sieved samples (100-1000µm), separating them into individual particles (including foraminifera) and imaging each in pseudo-3D. The particles are classified and specimens of interest are sorted for Individual Foraminifera Analyses (44 per slide) and/or for classical multiple analyses (8 morphological classes per slide, up to 1000 individuals per hole). The classification is based on machine learning using Convolutional Neural Networks (CNNs), similar to the approach used in the coccolithophorid imaging system SYRACO. To prove its feasibility, we built two training image datasets of modern planktonic foraminifera containing approximately 2000 and 5000 images, corresponding to 15 and 25 morphological classes, respectively. Using a CNN with a residual topology (ResNet) we achieve over 95% correct classification for each dataset. We tested the network on 160,000 images from 45 depths of a sediment core from the Pacific Ocean, for which we have human counts. The current algorithm is able to reproduce the downcore variability in both Globigerinoides ruber and the fragmentation index (r2 = 0.58 and 0.88, respectively). The FIRST prototype yields promising results for high-resolution paleoceanographic studies and evolutionary studies.
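
    As a loose illustration of the residual-topology CNN mentioned above, the sketch below adapts an off-the-shelf ResNet-18 from torchvision (version 0.13 or later assumed for the weights argument) to 15 morphological classes and runs one dummy training step; the class count, image size, and optimizer are assumptions, not the FIRST project's actual configuration.

      import torch
      import torch.nn as nn
      from torchvision import models

      n_classes = 15
      model = models.resnet18(weights=None)                   # residual (ResNet) backbone
      model.fc = nn.Linear(model.fc.in_features, n_classes)   # replace the final layer

      criterion = nn.CrossEntropyLoss()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

      # One dummy training step on a random batch of 224x224 RGB particle images.
      images = torch.randn(8, 3, 224, 224)
      targets = torch.randint(0, n_classes, (8,))
      optimizer.zero_grad()
      loss = criterion(model(images), targets)
      loss.backward()
      optimizer.step()
      print(float(loss))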

  9. Digital disaster evaluation and its application to 2015 Ms 8.1 Nepal Earthquake

    NASA Astrophysics Data System (ADS)

    WANG, Xiaoqing; LV, Jinxia; DING, Xiang; DOU, Aixia

    2016-11-01

    The purpose of this article is to explore a technical solution for extracting and evaluating disaster information from digital remote sensing (RS) images in an internet environment, aided by social and geographic information. The solution combines several methods: a fast post-disaster assessment system automatically estimates the disaster area and grade; multi-phase satellite and airborne high-resolution digital RS images provide the basis for extracting damaged areas or spots, assisted by rapid positioning of targets at risk of serious damage according to geographic, administrative, population, building and other information in the estimated disaster region; and a 2D digital map system or 3D digital earth system provides a platform for cooperative interpretation of the damage information over the internet, and further for estimating the spatial distribution of damage index or intensity, casualties and economic losses, which supports decision-making for emergency rescue, disaster relief, resettlement and reconstruction. As an example of this solution, the spatial seismic damage distribution of the 2015 Ms 8.1 Nepal earthquake is evaluated using high-resolution digital RS images, auxiliary geographic information and ground survey data. The results show good consistency with the disaster statistics obtained from field surveys.

  10. Detecting tents to estimate the displaced populations for post-disaster relief using high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Wang, Shifeng; So, Emily; Smith, Pete

    2015-04-01

    Estimating the number of refugees and internally displaced persons is important for planning and managing an efficient relief operation following disasters and conflicts. Accurate estimates of refugee numbers can be inferred from the number of tents. Extracting tents from high-resolution satellite imagery has recently been suggested. However, it is still a significant challenge to extract tents automatically and reliably from remote sensing imagery. This paper describes a novel automated method, which is based on mathematical morphology, to generate a camp map to estimate the refugee numbers by counting tents on the camp map. The method is especially useful in detecting objects with a clear shape, size, and significant spectral contrast with their surroundings. Results for two study sites with different satellite sensors and different spatial resolutions demonstrate that the method achieves good performance in detecting tents. The overall accuracy can be up to 81% in this study. Further improvements should be possible if over-identified isolated single pixel objects can be filtered. The performance of the method is impacted by spectral characteristics of satellite sensors and image scenes, such as the extent of area of interest and the spatial arrangement of tents. It is expected that the image scene would have a much higher influence on the performance of the method than the sensor characteristics.
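
    As a hedged sketch of a morphology-based tent counter in the spirit of this method (the published algorithm is not reproduced here), bright compact objects of roughly tent size can be enhanced with a white top-hat transform, thresholded, cleaned of isolated single-pixel responses, and counted; the file name, structuring-element size, and thresholds are placeholders to be tuned to the image resolution.

      import numpy as np
      from skimage import io, measure, morphology

      def count_tent_candidates(path, footprint_radius=4, min_pixels=6, rel_threshold=0.3):
          band = io.imread(path).astype(float)
          if band.ndim == 3:
              band = band.mean(axis=-1)                       # crude panchromatic proxy
          tophat = morphology.white_tophat(band, morphology.disk(footprint_radius))
          candidates = tophat > rel_threshold * tophat.max()
          candidates = morphology.remove_small_objects(candidates, min_size=min_pixels)
          return measure.label(candidates).max()

      print(count_tent_candidates("camp_scene.tif"), "tent candidates")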

  11. Visual Uav Trajectory Plan System Based on Network Map

    NASA Astrophysics Data System (ADS)

    Li, X. L.; Lin, Z. J.; Su, G. Z.; Wu, B. Y.

    2012-07-01

    The base map used by the current trajectory-planning software UP-30 for unmanned aerial vehicles (UAVs) is a vector diagram, and navigation points are drawn manually. During field operation, efficiency and quality suffer from insufficient information, screen reflections, inconvenient calculations and other factors. Planning the trajectory indoors would eliminate the effect of these external factors: network-map users can browse free high-definition satellite imagery of the world through a client application and export high-resolution images in standard file formats, which makes trajectory planning far more convenient. The images must, however, first be processed by coordinate transformation and geometric correction. In addition, from the required mapping scale, the camera parameters and the overlap degree, the exposure interval and the distance between adjacent flight lines can be calculated automatically, which improves the degree of automation of data collection. The software determines the position of the next navigation point from the intersection of the trajectory with the survey area and fixes it according to the flight-line spacing; points can also be adjusted manually, so trajectory planning is both automatic and flexible. For safety, the plan is used for flying only after a simulated flight, and finally all of the data can be exported with a single command.
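
    The exposure interval and adjacent-line spacing mentioned above follow from standard photogrammetric geometry; a minimal worked example, assuming nadir imagery over flat terrain and placeholder camera values (not UP-30 parameters), is:

      def flight_plan(flying_height_m, focal_length_mm, sensor_width_mm, sensor_height_mm,
                      forward_overlap=0.8, side_overlap=0.6):
          scale = flying_height_m / (focal_length_mm / 1000.0)      # photo scale number
          ground_width = scale * (sensor_width_mm / 1000.0)         # across-track footprint
          ground_height = scale * (sensor_height_mm / 1000.0)       # along-track footprint
          base = ground_height * (1.0 - forward_overlap)            # exposure interval
          line_spacing = ground_width * (1.0 - side_overlap)        # adjacent-line distance
          return base, line_spacing

      base, spacing = flight_plan(300.0, 35.0, 36.0, 24.0)
      print(f"exposure base ~{base:.0f} m, line spacing ~{spacing:.0f} m")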

  12. Improving dust emission characterization in dust models using dynamic high-resolution geomorphic erodibility map

    NASA Astrophysics Data System (ADS)

    Parajuli, S. P.; Yang, Z.; Kocurek, G.

    2013-12-01

    Dust is known to affect the earth radiation budget, biogeochemical cycle, precipitation, human health and visibility. Despite the increased research effort, dust emission modeling remains challenging because dust emission is affected by complex geomorphological processes. Existing dust models overestimate dust emission and rely on tuning and a static erodibility factor in order to make simulated results comparable to remote sensing and ground-based observations. In most current models, dust emission is expressed in terms of threshold friction speed, which ultimately depends mainly upon the percentage clay content and soil moisture. Unfortunately, due to the unavailability of accurate and high resolution input data for clay content and soil moisture, the estimated threshold friction speed commonly does not represent the variability in field conditions. In this work, we attempt to improve dust emission characterization by developing a high resolution geomorphic map of the Middle East and North Africa (MENA), which is responsible for more than 50% of global dust emission. We develop this geomorphic map by visually examining high resolution satellite images obtained from Google Earth Pro and the ESRI base map. Albeit subjective, our technique is more reliable than automatic image classification techniques because we incorporate knowledge of the geological/geographical setting when identifying dust sources. We hypothesize that erodibility is unique for different geomorphic landforms and that it can be quantified by the correlation between observed wind speed and satellite-retrieved aerosol optical depth (AOD). We classify the study area into several key geomorphological categories with respect to their dust emission potential. Then we quantify their dust emission potential using the correlation between observed wind speed and satellite-retrieved AOD. The dynamic, high-resolution geomorphic erodibility map thus prepared will help to reduce the uncertainty in current dust models associated with poor characterization of dust sources. The baseline dust scheme used in this study is the Dust Entrainment and Deposition (DEAD) model, which is also a component of the Community Land Model (CLM). Proposed improvements in the dust emission representation will help to more accurately understand the effect of dust on climate processes.
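
    One way to sketch the proposed erodibility quantification, assuming a hypothetical table of co-located station wind speeds and satellite AOD values tagged by mapped landform (the column and file names are invented), is a per-landform correlation:

      import pandas as pd

      # Per-landform correlation of wind speed with AOD, used as a relative erodibility score.
      obs = pd.read_csv("wind_aod_by_landform.csv")   # columns: landform, wind_speed, aod

      erodibility = (
          obs.groupby("landform")
             .apply(lambda g: g["wind_speed"].corr(g["aod"]))
             .clip(lower=0.0)                          # treat negative correlations as non-emissive
             .sort_values(ascending=False)
      )
      print(erodibility)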

  13. Hyperspectral classification of grassland species: towards a UAS application for semi-automatic field surveys

    NASA Astrophysics Data System (ADS)

    Lopatin, Javier; Fassnacht, Fabian E.; Kattenborn, Teja; Schmidtlein, Sebastian

    2017-04-01

    Grasslands are among the ecosystems that have been most strongly altered by anthropogenic impacts during the past decades, affecting their structural and functional composition. To monitor the spatial and/or temporal changes of these environments, a reliable field survey is first needed. As quality relevés are usually expensive and time consuming, the amount of information available is usually poor or not well spatially distributed at the regional scale. In the present study, we investigate the possibility of a semi-automated method for repeated surveys of monitoring sites. We analyze the applicability of very high spatial resolution hyperspectral data to classify grassland species at the level of individuals. The AISA+ imaging spectrometer mounted on a scaffold was applied to scan 1 m2 grassland plots and assess the impact of four sources of variation on the predicted species cover: (1) the spatial resolution of the scans, (2) the species number and structural diversity, (3) the species cover, and (4) the species functional types (bryophytes, forbs and graminoids). We found that the spatial resolution and the diversity level (mainly structural diversity) were the most important sources of variation for the proposed approach. A spatial resolution below 1 cm produced relatively high model performances, while predictions with pixel sizes over that threshold produced inadequate results. Areas with low interspecies overlap reached classification median values of 0.8 (kappa). In contrast, results were not satisfactory in plots with frequent interspecies overlap in multiple layers. By means of a bootstrapping procedure, we found that areas with shadows and mixed pixels introduce uncertainties into the classification. We conclude that the application of very high resolution hyperspectral remote sensing as a robust alternative or supplement to field surveys is possible for environments with low structural heterogeneity. This study presents the first attempt at a full classification of grassland species at the individual level using spectral data.

  14. VIIRS Data and Data Access at the NASA National Snow and Ice Data Center Distributed Active Archive Center

    NASA Astrophysics Data System (ADS)

    Moth, P.; Johnston, T.; Fowler, D. K.

    2017-12-01

    Working collaboratively, NASA and NOAA are producing data from the Visible Infrared Imaging Radiometer Suite (VIIRS). The National Snow and Ice Data Center (NSIDC), a NASA Distributed Active Archive Center (DAAC), is distributing VIIRS snow cover, ice surface temperature, and sea ice cover products. Data are available in .nc and HDF5 formats with a temporal coverage of 1 January 2012 and onward. VIIRS, NOAA's latest radiometer, was launched aboard the Suomi National Polar-orbiting Partnership (SNPP) satellite on October 28, 2011. The instrument comprises 22 bands: five for high-resolution imagery, 16 at moderate resolution, and one panchromatic day/night band, with nadir spatial resolutions of 375 m, 750 m, and 750 m, respectively. VIIRS is a whiskbroom scanning radiometer that covers the spectrum between 0.412 μm and 12.01 μm. One distinct advantage of VIIRS is that it ensures continuity with the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on the NASA Earth Observing System (EOS) Aqua and Terra satellites, which will lead to the development of snow and sea ice climate data records. Combined with the Advanced Very High Resolution Radiometer (AVHRR), the AVHRR-MODIS-VIIRS timeline will start in the early 1980s and span at least four decades, and perhaps beyond, enabling researchers to produce and gain valuable insight from long, high-quality Earth System Data Records (ESDRs). Several options are available to view and download VIIRS data: direct download from NSIDC via HTTPS; NASA Earthdata Search, where users can explore and download VIIRS data with temporal and/or spatial filters, re-format, re-project, and subset by spatial extent and parameter, with API access also available for all these options; NASA Worldview, where users can view Global Imagery Browse Services (GIBS) imagery from VIIRS data; and a VIIRS subscription list, through which new VIIRS data are automatically ftp'd or staged on a local server as they are archived at NSIDC.

  15. Mapping turbidity in the Charles River, Boston using a high-resolution satellite.

    PubMed

    Hellweger, Ferdi L; Miller, Will; Oshodi, Kehinde Sarat

    2007-09-01

    The usability of high-resolution satellite imagery for estimating spatial water quality patterns in urban water bodies is evaluated using turbidity in the lower Charles River, Boston as a case study. Water turbidity was surveyed using a boat-mounted optical sensor (YSI) at 5 m spatial resolution, resulting in about 4,000 data points. The ground data were collected coincidently with a satellite imagery acquisition (IKONOS), which consists of multispectral (R, G, B) reflectance at 1 m resolution. The original correlation between the raw ground and satellite data was poor (R2 = 0.05). Ground data were processed by removing points affected by contamination (e.g., sensor encounters a particle floc), which were identified visually. Also, the ground data were corrected for the memory effect introduced by the sensor's protective casing using an analytical model. Satellite data were processed to remove pixels affected by permanent non-water features (e.g., shoreline). In addition, water pixels within a certain buffer distance from permanent non-water features were removed due to contamination by the adjacency effect. To determine the appropriate buffer distance, a procedure that explicitly considers the distance of pixels to the permanent non-water features was applied. Two automatic methods for removing the effect of temporary non-water features (e.g., boats) were investigated, including (1) creating a water-only mask based on an unsupervised classification and (2) removing (filling) all local maxima in reflectance. After the various processing steps, the correlation between the ground and satellite data was significantly better (R2 = 0.70). The correlation was applied to the satellite image to develop a map of turbidity in the lower Charles River, which reveals large-scale patterns in water clarity. However, the adjacency effect prevented the application of this method to near-shore areas, where high-resolution patterns were expected (e.g., outfall plumes).
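
    A minimal regression step in the spirit of this study, relating processed turbidity samples to coincident R, G, B reflectance by ordinary least squares and reporting R2, could be sketched as follows; the data points are synthetic placeholders, not the Charles River measurements.

      import numpy as np

      def fit_turbidity(reflectance_rgb, turbidity):
          X = np.column_stack([reflectance_rgb, np.ones(len(turbidity))])  # add intercept
          coeffs, _, _, _ = np.linalg.lstsq(X, turbidity, rcond=None)
          predicted = X @ coeffs
          ss_res = np.sum((turbidity - predicted) ** 2)
          ss_tot = np.sum((turbidity - turbidity.mean()) ** 2)
          return coeffs, 1.0 - ss_res / ss_tot

      rng = np.random.default_rng(2)
      rgb = rng.uniform(0.02, 0.2, (200, 3))                         # matched satellite pixels
      ntu = 40.0 * rgb[:, 0] - 10.0 * rgb[:, 2] + rng.normal(0.0, 0.5, 200) + 5.0
      coeffs, r2 = fit_turbidity(rgb, ntu)
      print("R^2 =", round(r2, 2))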

  16. Automated segmentations of skin, soft-tissue, and skeleton, from torso CT images

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Yokoyama, Ryujiro; Kiryu, Takuji; Hoshi, Hiroaki

    2004-05-01

    We have been developing a computer-aided diagnosis (CAD) scheme for automatically recognizing human tissue and organ regions from high-resolution torso CT images. We show some initial results for extracting skin, soft-tissue and skeleton regions. 139 patient cases of torso CT images (male 92, female 47; age: 12-88) were used in this study. Each case was imaged with a common protocol (120 kV/320 mA) and covered the whole torso with an isotropic spatial resolution of about 0.63 mm and a density resolution of 12 bits. A gray-level thresholding based procedure was applied to separate the human body from the background. Density and distance-to-body-surface features were used to determine the skin and to separate soft tissue from the other regions. A 3-D region growing based method was used to extract the skeleton. We applied this system to the 139 cases and found that the skin, soft-tissue and skeleton regions were recognized correctly for 93% of the patient cases. The accuracy of the segmentation results, evaluated slice by slice, was acceptable. This scheme will be included in CAD systems for detecting and diagnosing abnormal lesions in multi-slice torso CT images.
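
    A simplified sketch of this kind of rule-based pipeline is shown below; the HU thresholds and skin depth are assumptions, and the paper's 3-D region-growing step for the skeleton is replaced here by a plain density threshold inside the body.

      import numpy as np
      from scipy import ndimage

      def segment_torso(ct_hu, skin_depth_mm=2.0, voxel_mm=0.63, bone_hu=200.0):
          body = ct_hu > -500.0                                    # air/body threshold (HU)
          labels, _ = ndimage.label(body)
          sizes = np.bincount(labels.ravel()); sizes[0] = 0
          body = labels == sizes.argmax()                          # keep the largest component
          depth = ndimage.distance_transform_edt(body) * voxel_mm  # distance to body surface
          skin = body & (depth <= skin_depth_mm)                   # thin shell near the surface
          skeleton = body & (ct_hu > bone_hu)                      # high-density voxels
          soft_tissue = body & ~skin & ~skeleton
          return skin, soft_tissue, skeleton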

  17. Algorithm design of liquid lens inspection system

    NASA Astrophysics Data System (ADS)

    Hsieh, Lu-Lin; Wang, Chun-Chieh

    2008-08-01

    In the mobile lens domain, glass lenses are often used when high resolution is required, but a glass zoom lens must be combined with movable machinery and a voice-coil motor, which imposes space limits on miniaturized designs. With the development of high-level molded components, the liquid lens has become a focus of mobile phone and digital camera companies. A liquid lens assembled with solid optical lenses and a driving circuit has replaced the original components, reducing the volume requirement to merely 50% of the original design. In addition, with its fast focus adjustment, low energy consumption, high durability, and low-cost manufacturing process, the liquid lens has clear advantages in a competitive market. For glass lenses, only scrape defects caused by external force had to be inspected; for the liquid lens, the state of four different structural layers must be inspected because of its different design and structure. In this paper, we apply machine vision and digital image processing technology to perform inspections on a particular layer according to the needs of users. Experimental results show that the proposed algorithm can automatically remove the out-of-focus background, extract the region of interest, and find and analyze defects efficiently in the selected layer. In the future, we will combine this algorithm with automatic-focus technology to implement internal inspection according to product inspection demands.
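
    A rough sketch of the two steps named above, removing the out-of-focus background and then flagging defect candidates inside the remaining region of interest, might look like this; it is not the authors' algorithm, and the file name and all thresholds are placeholders.

      import cv2
      import numpy as np

      img = cv2.imread("liquid_lens_layer.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

      # Focus map: local energy of the Laplacian response; low values mark out-of-focus background.
      lap = cv2.Laplacian(img, cv2.CV_32F)
      focus = cv2.blur(lap * lap, (31, 31))
      roi = (focus > 0.1 * focus.max()).astype(np.uint8)

      # Defect candidates: pixels deviating strongly from a smoothed version of the layer.
      background = cv2.medianBlur(img.astype(np.uint8), 21).astype(np.float32)
      defects = (np.abs(img - background) > 25) & (roi > 0)
      n_labels, _ = cv2.connectedComponents(defects.astype(np.uint8))
      print(n_labels - 1, "defect candidates")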

  18. Control of the TSU 2-m automatic telescope

    NASA Astrophysics Data System (ADS)

    Eaton, Joel A.; Williamson, Michael H.

    2004-09-01

    Tennessee State University is operating a 2-m automatic telescope for high-dispersion spectroscopy. The alt-azimuth telescope is fiber-coupled to a conventional echelle spectrograph with two resolutions (R=30,000 and 70,000). We control this instrument with four computers running linux and communicating over ethernet through the UDP protocol. A computer physically located on the telescope handles the acquisition and tracking of stars. We avoid the need for real-time programming in this application by periodically latching the positions of the axes in a commercial motion controller and the time in a GPS receiver. A second (spectrograph) computer sets up the spectrograph and runs its CCD, a third (roof) computer controls the roll-off roof and front flap of the telescope enclosure, and the fourth (executive) computer makes decisions about which stars to observe and when to close the observatory for bad weather. The only human intervention in the telescope's operation involves changing the observing program, copying data back to TSU, and running quality-control checks on the data. It has been running reliably in this completely automatic, unattended mode for more than a year, with all day-to-day administration carried out over the Internet. To support automatic operation, we have written a number of useful tools to predict and analyze what the telescope does. These include a simulator that predicts roughly how the telescope will operate on a given night, a quality-control program to parse logfiles from the telescope and identify problems, and a rescheduling program that calculates new priorities to keep the frequency of observation for the various stars roughly as desired. We have also set up a database to keep track of the tens of thousands of spectra we expect to get each year.

  19. Monitoring and localization hydrocarbon and sulfur oxides emissions by SRS-lidar

    NASA Astrophysics Data System (ADS)

    Zhevlakov, A. P.; Konopelko, L. P.; Bespalov, V. G.; Elizarov, V. V.; Grishkanich, A. S.; Redka, D. N.; Bogoslovsky, S. A.; Il'inskiy, A. A.; Chubchenko, Y. K.

    2017-10-01

    We developed a Raman lidar with ultraspectral resolution for automatic airborne monitoring of pipeline leaks and for oil and gas exploration. Test flights indicate that a sensitivity of 6 ppm for methane and 2 ppm for hydrogen sulfide has been reached for leakage detection.

  20. Oil spill disasters detection and monitoring by optical satellite data

    NASA Astrophysics Data System (ADS)

    Livia Grimaldi, Caterina Sara; Coviello, Irina; Lacava, Teodosio; Pergola, Nicola; Tramutoli, Valerio

    2010-05-01

    Marine oil spill disasters may be related to natural hazards, when storms and hurricanes cause the sinking of tankers carrying crude or refined oil, as well as to human action, such as illegal discharges, assessment errors (failures or collisions), or acts of warfare. Their consequences have devastating effects on the marine and coastal environment. In order to reduce the environmental impact of this kind of hazard and give local authorities the necessary information on the extent and evolution of the pollution, timely detection and continuously updated information are fundamental. Satellite remote sensing can give a significant contribution in this direction. Nowadays, SAR (Synthetic Aperture Radar) technology is recognized as the most efficient for oil spill detection and description, thanks to the high spatial resolution and all-time/all-weather capability of the present operational sensors. However, the actual SAR revisit time does not allow rapid detection and near-real-time monitoring of these phenomena at the global scale. The COSMO-SkyMed Italian dual-use mission (expected in 2010) will overcome this limitation, improving the temporal resolution to 12 hours with a SAR constellation of four satellites, but several open questions regarding costs and the global delivery policy of such data might prevent their use in an operational context. Passive optical sensors on board meteorological satellites, thanks to their high temporal resolution (from a few hours to 15 minutes, depending on the characteristics of the platform/sensor), may at present represent a suitable alternative or complement to SAR for oil spill detection and monitoring. Some techniques have been proposed for mapping known oil spill discharges using optical satellite data; on the other hand, reliable satellite methods for automatic and timely detection of oil spills are still missing. Existing methods, in fact, can localize the presence of an oil spill only after an alert and require the presence of a qualified operator. Recently, an innovative technique for near-real-time oil spill detection and monitoring has been proposed. The technique is based on the general RST (Robust Satellite Technique) approach, which exploits long-term multi-temporal satellite records to obtain a preliminary characterization of the measured signal, in terms of expected value and natural variability, and then identifies signal anomalies through an automatic, unsupervised change-detection step. Results obtained using both AVHRR (Advanced Very High Resolution Radiometer) and MODIS (Moderate Resolution Imaging Spectroradiometer) data in different geographic areas and observational conditions demonstrate excellent detection capabilities, both in terms of sensitivity (to the presence even of very thin/old oil films) and reliability (up to zero occurrence of false alarms), mainly due to the invariance of RST with respect to local and environmental conditions. Moreover, the possibility of applying the RST approach to both MODIS and AVHRR sensors may ensure an improved (up to 3 hours and less) frequency of TIR (Thermal Infrared) observations as well as an increased spatial accuracy in the description of oil spills (thanks to the higher spatial resolution of MODIS visible channels). In this paper, results obtained by applying the proposed methodology to events of different extents and in different geographic areas are shown and discussed.
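
    The RST idea described here can be sketched schematically: a long multi-temporal record gives each pixel an expected value and a natural variability, and a new scene is flagged where its standardized departure exceeds a threshold. The sketch below uses synthetic numbers and is not the authors' operational implementation.

      import numpy as np

      def rst_anomaly_index(historical_stack, new_image):
          """historical_stack: (n_dates, ny, nx) co-located images; new_image: (ny, nx)."""
          mean = np.nanmean(historical_stack, axis=0)        # per-pixel expected value
          std = np.nanstd(historical_stack, axis=0) + 1e-6   # per-pixel natural variability
          return (new_image - mean) / std

      history = np.random.default_rng(3).normal(290.0, 1.5, (120, 100, 100))  # K, synthetic record
      scene = history.mean(axis=0).copy()
      scene[40:60, 40:70] -= 6.0                    # synthetic slick-like thermal anomaly
      index = rst_anomaly_index(history, scene)
      candidate_mask = index < -3.0                 # keep only confident negative departures
      print(candidate_mask.sum(), "anomalous pixels")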
