Sample records for the image processing community

  1. RayPlus: a Web-Based Platform for Medical Image Processing.

    PubMed

    Yuan, Rong; Luo, Ming; Sun, Zhi; Shi, Shuyue; Xiao, Peng; Xie, Qingguo

    2017-04-01

    Medical images can provide valuable information for preclinical research, clinical diagnosis, and treatment. With the widespread use of digital medical imaging, many researchers are currently developing medical image processing algorithms and systems to deliver better results to the clinical community, including accurate clinical parameters or processed images derived from the originals. In this paper, we propose a web-based platform to present and process medical images. Using Internet and novel database technologies, authorized users can easily access medical images and streamline their processing workflows with powerful server-side computing, without any local installation. We implement a series of image processing and visualization algorithms in the initial version of RayPlus. Our system offers considerable flexibility and convenience for both the research and clinical communities.

  2. Image Re-Ranking Based on Topic Diversity.

    PubMed

    Qian, Xueming; Lu, Dan; Wang, Yaxiong; Zhu, Li; Tang, Yuan Yan; Wang, Meng

    2017-08-01

    Social media sharing websites allow users to annotate images with free tags, which contribute significantly to the development of web image retrieval. Tag-based image search is an important method for finding images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval that also promotes topic coverage. First, we construct a tag graph based on the similarity between tags. Then, community detection is applied to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieval results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multi-information of each topic community. In addition, we build an inverted index structure for images to accelerate the search process. Experimental results on the Flickr and NUS-WIDE data sets show the effectiveness of the proposed approach.
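
    The pipeline this abstract describes (tag graph, community detection, inter-/intra-community ranking) can be illustrated compactly. Below is a minimal sketch of the general idea using networkx and toy data; it is not the authors' implementation, and the tag similarities, images, and simple round-robin diversification are all illustrative assumptions.

    ```python
    # Sketch: topic-diverse re-ranking via tag-graph community detection.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy tag-similarity graph (edge weight = tag-to-tag similarity).
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("beach", "sea", 0.9), ("sea", "wave", 0.8), ("beach", "sand", 0.7),
        ("dog", "puppy", 0.9), ("dog", "pet", 0.8),
        ("sunset", "sky", 0.85), ("sky", "cloud", 0.75),
    ])

    # Mine topic communities of tags (greedy modularity maximization).
    topics = list(greedy_modularity_communities(G, weight="weight"))

    # Toy relevance-ranked image list: (image_id, relevance, tags).
    images = [
        ("img1", 0.95, {"beach", "sea"}), ("img2", 0.93, {"sea", "wave"}),
        ("img3", 0.90, {"dog", "puppy"}), ("img4", 0.88, {"sunset", "sky"}),
    ]

    def topic_of(tags):
        # Assign an image to the community covering most of its tags.
        return max(range(len(topics)), key=lambda k: len(tags & topics[k]))

    # Diversify: round-robin across topic communities, by relevance within.
    by_topic = {}
    for img in sorted(images, key=lambda x: -x[1]):
        by_topic.setdefault(topic_of(img[2]), []).append(img)
    reranked = []
    while any(by_topic.values()):
        for k in list(by_topic):
            if by_topic[k]:
                reranked.append(by_topic[k].pop(0))
    print([name for name, _, _ in reranked])  # topics interleave at the top
    ```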

  3. Iplt--image processing library and toolkit for the electron microscopy community.

    PubMed

    Philippsen, Ansgar; Schenk, Andreas D; Stahlberg, Henning; Engel, Andreas

    2003-01-01

    We present the foundation for establishing a modular, collaborative, integrated, open-source architecture for image processing of electron microscopy images, named iplt. It is designed around object-oriented paradigms and implemented in the programming languages C++ and Python. In many aspects it deviates from classical image processing approaches. This paper is intended to motivate developers within the community to participate in this ongoing project. The iplt homepage can be found at http://www.iplt.org.

  4. Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications

    NASA Technical Reports Server (NTRS)

    Redmann, G. H.

    1976-01-01

    Recent advances in image processing and new applications are presented to the user community to stimulate the development and transfer of this technology to industrial and commercial applications. The proceedings contain 37 papers and abstracts, including many illustrations (some in color), and provide a single reference source for the user community on ordering and obtaining NASA-developed image-processing software and science data.

  5. Developing an ANSI standard for image quality tools for the testing of active millimeter wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Greca, Joseph; Yam, Kevin; Weatherall, James C.; Smith, Peter R.; Smith, Barry T.

    2017-05-01

    In 2016, the millimeter wave (MMW) imaging community initiated the formation of a standard for millimeter wave image quality metrics. This new standard, American National Standards Institute (ANSI) N42.59, will apply to active MMW systems for security screening of humans. The Electromagnetic Signatures of Explosives Laboratory at the Transportation Security Laboratory is supporting the ANSI standards process by creating initial prototypes for round-robin testing with MMW imaging system manufacturers and experts. Results obtained for these prototypes will be used to inform the community and lead to consensus on objective standards among stakeholders. Images collected with laboratory systems are presented along with results of preliminary image analysis. Future directions for object design, data collection, and image processing are discussed.

  6. Saliency-aware food image segmentation for personal dietary assessment using a wearable computer

    USDA-ARS's Scientific Manuscript database

    Image-based dietary assessment has recently received much attention in the community of obesity research. In this assessment, foods in digital pictures are specified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most utilized method, image processing h...

  7. Using endmembers in AVIRIS images to estimate changes in vegetative biomass

    NASA Technical Reports Server (NTRS)

    Smith, Milton O.; Adams, John B.; Ustin, Susan L.; Roberts, Dar A.

    1992-01-01

    Field techniques for estimating vegetative biomass are labor intensive and are rarely used to monitor changes in biomass over time. Remote sensing offers an attractive alternative to field measurements; however, because there is no simple correspondence between encoded radiance in multispectral images and biomass, it is not possible to measure vegetative biomass directly from AVIRIS images. We investigate ways to estimate vegetative biomass by identifying community types and then applying biomass scalars derived from field measurements. Field measurements of community-scale vegetative biomass can be made, at least for local areas, but it is not always possible to identify vegetation communities unambiguously using remote measurements and conventional image-processing techniques. Furthermore, even when communities are well characterized in a single image, it typically is difficult to assess the extent and nature of changes in a time series of images, owing to uncertainties introduced by variations in illumination geometry, atmospheric attenuation, and instrumental responses. Our objective is to develop an improved method, based on spectral mixture analysis, to characterize and identify vegetative communities that can be applied to multi-temporal AVIRIS and other types of images. In previous studies, multi-temporal data sets (AVIRIS and TM) of Owens Valley, CA, were analyzed and vegetation communities were defined in terms of fractions of reference (laboratory and field) endmember spectra. An advantage of converting an image to fractions of reference endmembers is that, although fractions in a given pixel may vary from image to image in a time series, the endmembers themselves typically are constant, thus providing a consistent frame of reference.
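
    At the heart of the spectral mixture analysis mentioned above is modeling each pixel spectrum as a linear combination of endmember spectra and solving for the fractions. A minimal worked example with made-up endmembers and a soft sum-to-one constraint follows (the study's actual endmember spectra and solver details differ):

    ```python
    # Sketch: linear spectral unmixing, d ≈ E @ f, fractions f sum to 1.
    import numpy as np

    rng = np.random.default_rng(0)
    bands = 50
    # Toy endmember spectra as columns: vegetation, soil, shade.
    E = np.stack([np.linspace(0.1, 0.6, bands),   # vegetation (illustrative)
                  np.full(bands, 0.3),            # soil (illustrative)
                  np.full(bands, 0.02)], axis=1)  # shade (illustrative)

    f_true = np.array([0.5, 0.3, 0.2])
    d = E @ f_true + 0.005 * rng.standard_normal(bands)  # observed pixel

    # Append a sum-to-one constraint row, then solve by least squares.
    A = np.vstack([E, np.ones((1, 3))])
    b = np.append(d, 1.0)
    f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.round(f_hat, 3))  # close to the true fractions [0.5, 0.3, 0.2]
    ```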

  8. Investigation of radio astronomy image processing techniques for use in the passive millimetre-wave security screening environment

    NASA Astrophysics Data System (ADS)

    Taylor, Christopher T.; Hutchinson, Simon; Salmon, Neil A.; Wilkinson, Peter N.; Cameron, Colin D.

    2014-06-01

    Image processing techniques can be used to improve the cost-effectiveness of future interferometric Passive MilliMetre Wave (PMMW) imagers. The implementation of such techniques will allow a reduction in the number of collecting elements whilst ensuring adequate image fidelity is maintained. Various techniques have been developed by the radio astronomy community to enhance the imaging capability of sparse interferometric arrays. The most prominent are Multi-Frequency Synthesis (MFS) and non-linear deconvolution algorithms, such as the Maximum Entropy Method (MEM) and variations of the CLEAN algorithm. This investigation focuses on the implementation of these methods in the de facto standard for radio astronomy image processing, the Common Astronomy Software Applications (CASA) package, building upon the discussion presented in Taylor et al., SPIE 8362-0F. We describe the image conversion process into a CASA-suitable format, followed by a series of simulations that exploit the highlighted deconvolution and MFS algorithms assuming far-field imagery. The primary target application used for this investigation is an outdoor security scanner for soft-sided Heavy Goods Vehicles. A quantitative analysis of the effectiveness of the aforementioned image processing techniques is presented, with thoughts on the potential cost savings such an approach could yield. Consideration is also given to how the implementation of these techniques in CASA might be adapted to operate in a near-field target environment. This may enable much wider use by the imaging community outside radio astronomy and thus would be directly relevant to portal screening security systems in the microwave and millimetre wave bands.
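
    Of the algorithms named above, Högbom CLEAN is simple enough to sketch: repeatedly locate the brightest residual pixel and subtract a scaled, shifted copy of the dirty beam. The 1-D toy version below illustrates the loop only; CASA's implementations (and MEM) are far more elaborate.

    ```python
    # Sketch: Högbom CLEAN deconvolution in 1-D with toy data.
    import numpy as np

    def hogbom_clean(dirty, psf, gain=0.1, thresh=1e-3, max_iter=500):
        """Iteratively subtract the scaled PSF at the brightest residual."""
        res, model = dirty.copy(), np.zeros_like(dirty)
        c = len(psf) // 2
        for _ in range(max_iter):
            peak = int(np.argmax(np.abs(res)))
            if abs(res[peak]) < thresh:
                break
            amp = gain * res[peak]
            model[peak] += amp
            lo, hi = max(0, peak - c), min(len(res), peak + c + 1)
            res[lo:hi] -= amp * psf[c - (peak - lo): c + (hi - peak)]
        return model, res

    psf = np.exp(-0.5 * (np.arange(-16, 17) / 3.0) ** 2)  # toy dirty beam
    sky = np.zeros(128); sky[40] = 1.0; sky[90] = 0.6     # two point sources
    dirty = np.convolve(sky, psf, mode="same")
    model, residual = hogbom_clean(dirty, psf)
    print(np.argsort(model)[-2:])  # flux concentrates near pixels 40 and 90
    ```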

  9. Applications of digital image processing techniques to problems of data registration and correlation

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the evolution of the computer configuration at JPL's Image Processing Laboratory (IPL). The development of techniques for the geometric transformation of digital imagery is discussed, and consideration is given to automated and semiautomated image registration and the registration of imaging and nonimaging data. The increasing complexity of image processing tasks at IPL is illustrated with examples of various applications from the planetary program and earth resources activities. It is noted that the registration of existing geocoded databases with Landsat imagery will continue to be important if the Landsat data are to be of genuine use to the user community.

  10. The ImageJ ecosystem: an open platform for biomedical image analysis

    PubMed Central

    Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available, from commercial to academic, special-purpose to Swiss army knife, small to large, but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly, due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  11. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available, from commercial to academic, special-purpose to Swiss army knife, small to large, but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. © 2015 Wiley Periodicals, Inc.

  12. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    PubMed

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to decrease consistently with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method at FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors. Journal of Microscopy © 2013 Royal Microscopical Society.
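
    The construction in this abstract, pixels as nodes with FLT-similarity edge weights and a tunable network resolution, can be mimicked with a generic modularity-based community detection routine. The sketch below uses networkx's Louvain implementation and toy data as a stand-in for the paper's multiresolution method, so the segment counts are only qualitatively comparable.

    ```python
    # Sketch: FLT-image segmentation via community detection on a pixel graph.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import louvain_communities

    rng = np.random.default_rng(1)
    # Toy 16x16 FLT image: two regions with distinct mean lifetimes (ns).
    flt = np.where(np.arange(16)[:, None] < 8, 1.0, 2.5)
    flt = flt + 0.05 * rng.standard_normal((16, 16))

    # 4-connected pixel graph; edge weight decays with FLT difference.
    G = nx.grid_2d_graph(16, 16)
    for u, v in G.edges():
        G.edges[u, v]["weight"] = float(np.exp(-abs(flt[u] - flt[v]) / 0.1))

    # Higher resolution favors smaller communities (finer segments).
    for gamma in (0.5, 1.0, 2.0):
        parts = louvain_communities(G, weight="weight", resolution=gamma, seed=0)
        print(f"resolution={gamma}: {len(parts)} segments")
    ```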

  13. The HICO Image Processing System: A Web-Accessible Hyperspectral Remote Sensing Toolbox

    NASA Astrophysics Data System (ADS)

    Harris, A. T., III; Goodman, J.; Justice, B.

    2014-12-01

    As the quantity of Earth-observation data increases, the case for hosting analytical tools in geospatial data centers becomes increasingly attractive. To address this need, HySpeed Computing and Exelis VIS have developed the HICO Image Processing System, a prototype cloud computing system that provides online, on-demand, scalable remote sensing image processing capabilities. The system provides a mechanism for delivering sophisticated image processing analytics and data visualization tools into the hands of a global user community, who need only a browser and an internet connection to perform analysis. Functionality of the HICO Image Processing System is demonstrated using imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), an imaging spectrometer located on the International Space Station (ISS) that is optimized for acquisition of aquatic targets. Example applications include a collection of coastal remote sensing algorithms directed at deriving critical information on water and habitat characteristics of our vulnerable coastal environment. The project leverages the ENVI Services Engine as the framework for all image processing tasks, and can readily accommodate the rapid integration of new algorithms, datasets, and processing tools.

  14. MMX-I: A data-processing software for multi-modal X-ray imaging and tomography

    NASA Astrophysics Data System (ADS)

    Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.

    2017-06-01

    Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase, and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large multimodal data sets they generate (several hundred gigabytes). This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g., from the fields of biology, life sciences, geology, and geobiology), much of which has no experience with such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. We have therefore developed a multi-platform (Mac, Windows, and Linux 64-bit) data processing tool that is easy to install, comprehensive, intuitive, extendable, and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data.

  15. Reference software implementation for GIFTS ground data processing

    NASA Astrophysics Data System (ADS)

    Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.

    2006-08-01

    Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation, and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.

  16. Generalizing on best practices in image processing: a model for promoting research integrity: Commentary on: Avoiding twisted pixels: ethical guidelines for the appropriate use and manipulation of scientific digital images.

    PubMed

    Benos, Dale J; Vollmer, Sara H

    2010-12-01

    Modifying images for scientific publication is now quick and easy due to changes in technology. This has created a need for new image processing guidelines and attitudes, such as those offered to the research community by Doug Cromey (Cromey 2010). We suggest that related changes in technology have simplified the task of detecting misconduct for journal editors as well as researchers, and that this simplification has caused a shift in the responsibility for reporting misconduct. We also argue that the concept of best practices in image processing can serve as a general model for education in best practices in research.

  17. MMX-I: data-processing software for multimodal X-ray imaging and tomography.

    PubMed

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-05-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption, and dark field, and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of the large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.

  18. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
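
    To make the minimax-algebra notion of rank concrete: a rank-1 template is a max-plus "outer product" T[i, j] = t[i] + s[j], and its payoff is separability, since grey-level dilation by T then factors into two cheaper 1-D dilations. The following numerical check is illustrative only and is not the paper's decomposition algorithm.

    ```python
    # Sketch: a rank-1 (separable) morphological template in max-plus algebra.
    import numpy as np
    from scipy.ndimage import grey_dilation

    t = np.array([0.0, 1.0, 0.0])       # 1-D column template
    s = np.array([0.0, 2.0, 0.0])       # 1-D row template
    T = np.add.outer(t, s)              # rank-1: T[i, j] = t[i] + s[j]

    img = np.random.default_rng(2).random((32, 32))
    full = grey_dilation(img, structure=T)
    separated = grey_dilation(grey_dilation(img, structure=t[:, None]),
                              structure=s[None, :])
    inner = (slice(1, -1), slice(1, -1))  # compare away from the borders
    print(np.allclose(full[inner], separated[inner]))  # True: exact factorization
    ```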

  19. An IBM PC/AT-Based Image Acquisition and Processing System for Quantitative Image Analysis

    NASA Astrophysics Data System (ADS)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have previously developed a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We recognized from this development experience, and from interaction with system users, biologists, and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing their capabilities, would generate a need for an inexpensive, general-purpose image acquisition and processing system tailored to the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general-purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy to use represents a significant step towards making microscopic digital image processing techniques more widely applicable, not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  20. Community detection for fluorescent lifetime microscopy image segmentation

    NASA Astrophysics Data System (ADS)

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Achilefu, Samuel; Nussinov, Zohar

    2014-03-01

    A multiresolution community detection (CD) method has been suggested in recent work as an efficient method for performing unsupervised segmentation of fluorescence lifetime (FLT) images of live cells containing fluorescent molecular probes. In the current paper, we further explore this method on FLT images of ex vivo tissue slices. The image processing problem is framed as identifying clusters with respective average FLTs against a background or "solvent" in FLT imaging microscopy (FLIM) images derived using NIR fluorescent dyes. We have identified significant multiresolution structures using replica correlations in these images, where such correlations are manifested by information theoretic overlaps of the independent solutions ("replicas") attained using the multiresolution CD method from different starting points. In this paper, our method is found to be more efficient than a current state-of-the-art image segmentation method based on a mixture of Gaussian distributions. It offers more than 1.25 times the diversity of the latter method, based on the Shannon index, in selecting clusters with distinct average FLTs in NIR FLIM images.

  21. Land Cover Assessment of Indigenous Communities in the Bosawas Region of Nicaragua

    EPA Science Inventory


    Data derived from remotely sensed images were utilized to conduct land cover assessments of three indigenous communities in northern Nicaragua. Historical land use, present land cover and land cover change processes were all identified through the use of a geographic informat...

  22. funcLAB/G-service-oriented architecture for standards-based analysis of functional magnetic resonance imaging in HealthGrids.

    PubMed

    Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D

    2007-01-01

    Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of its imaging, image workflow, and post-processing, and a lack of algorithmic standards that hinders result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to the uncertainty of community physicians about how to integrate fMRI into practice. In addition, training of physicians with fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions which perform fMRI have a team of basic researchers and physicians to perform fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and made available even at institutions that lack these resources. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus makes standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.

  23. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption, and dark field, and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of the large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  24. Potential medical applications of TAE

    NASA Technical Reports Server (NTRS)

    Fahy, J. Ben; Kaucic, Robert; Kim, Yongmin

    1986-01-01

    In cooperation with scientists in the University of Washington Medical School, a microcomputer-based image processing system for quantitative microscopy, called DMD1 (Digital Microdensitometer 1), was constructed. In order to make DMD1 transportable to different hosts and image processors, we have been investigating the possibility of rewriting the lower-level portions of the DMD1 software using Transportable Applications Executive (TAE) libraries and subsystems. If successful, we hope to produce a newer version of DMD1, called DMD2, running on an IBM PC/AT under the SCO XENIX System 5 operating system, using any of seven target image processors available in our laboratory. Following this implementation, copies of the system will be transferred to other laboratories with biomedical imaging applications. By integrating those applications into DMD2, we hope to eventually expand our system into a low-cost, general-purpose biomedical imaging workstation. This workstation will be useful not only as a self-contained instrument for clinical or research applications, but also as part of a large-scale Digital Imaging Network and Picture Archiving and Communication System (DIN/PACS). Widespread application of these TAE-based image processing and analysis systems should facilitate software exchange and scientific cooperation not only within the medical community, but between the medical and remote sensing communities as well.

  25. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  26. Image analysis driven single-cell analytics for systems microbiology.

    PubMed

    Balomenos, Athanasios D; Tsakanikas, Panagiotis; Aspridou, Zafiro; Tampakaki, Anastasia P; Koutsoumanis, Konstantinos P; Manolakos, Elias S

    2017-04-04

    Time-lapse microscopy is an essential tool for capturing and correlating bacterial morphology and gene expression dynamics at single-cell resolution. However, state-of-the-art computational methods are limited in the complexity of the cell movies they can analyze and in their degree of automation. The proposed Bacterial image analysis driven Single Cell Analytics (BaSCA) computational pipeline addresses these limitations, thus enabling high-throughput systems microbiology. BaSCA can segment and track multiple bacterial colonies and single cells as they grow and divide over time (cell segmentation and lineage tree construction) to give rise to dense communities with thousands of interacting cells in the field of view. It combines advanced image processing and machine learning methods to deliver very accurate bacterial cell segmentation and tracking (F-measure over 95%) even when processing images of imperfect quality with several overcrowded colonies in the field of view. In addition, BaSCA extracts on the fly a plethora of single-cell properties, which are organized into a database summarizing the analysis of the cell movie. We present alternative ways to analyze and visually explore the spatiotemporal evolution of single-cell properties in order to understand trends and epigenetic effects across cell generations. The robustness of BaSCA is demonstrated across different imaging modalities and microscopy types. BaSCA can be used to analyze cell movies accurately and efficiently, both at high resolution (single-cell level) and at large scale (communities with many dense colonies), as needed to shed light on how bacterial community effects and epigenetic information transfer play a role in phenomena important for human health, such as biofilm formation and the emergence of persisters. Moreover, it enables studying the role of single-cell stochasticity without losing sight of community effects that may drive it.
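
    For flavor, a generic threshold/distance-transform/watershed step of the kind single-cell pipelines build on is sketched below. This is explicitly not BaSCA's algorithm (the abstract describes a more sophisticated combination of image processing and machine learning); the parameters are illustrative.

    ```python
    # Sketch: splitting touching cells with a marker-based watershed.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_cells(frame):
        """Label cells in a grayscale frame (toy parameters)."""
        mask = frame > threshold_otsu(frame)      # foreground vs. background
        dist = ndi.distance_transform_edt(mask)   # ridges at cell interiors
        peaks = peak_local_max(dist, min_distance=3, labels=mask)
        markers = np.zeros(frame.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-dist, markers, mask=mask)

    # Usage: labels = segment_cells(frame); labels.max() is the cell count.
    ```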

  27. [Structural Equation Modeling for Public Hospital Quality of Care, Image, Role Performance, Satisfaction, Intent to (Re)visit, and Intent to Recommend Hospital as Perceived by Community Residents].

    PubMed

    Hwang, Eun Jeong; Sim, In Ok

    2016-02-01

    The purposes of the study were to construct and test a structural equation model of the causal relationships among community residents' perceived quality of care, image, and role performance and their satisfaction, intention to (re)visit, and intention to recommend the hospital. A cross-sectional survey was conducted with 3,900 community residents from 39 district public hospitals. The questionnaire was designed to collect information on personal characteristics and community awareness of public hospitals. Community awareness consisted of 6 factors and 18 items. The data were collected via telephone interviews conducted by a survey company and analyzed using SPSS version 20.0 and AMOS version 20.0. Model fit indices for the hypothetical model met the recommended levels: χ²=796.40 (df=79, p<.001), GFI=.93, AGFI=.90, RMSR=.08, NFI=.94. Quality of care, image, and role performance explained 68.1% of the variance in community awareness. The total effects of quality-of-care process factors on satisfaction (path coefficient=3.67), intention to (re)visit (path coefficient=2.67), and intention to recommend the hospital (path coefficient=2.45) were higher than those of other factors. The findings show that public hospitals have to make an effort to improve their community image through the provision of quality care and excellent role performance. Support for these activities is available from both central and local governments.

  28. An Ecometric Study of Recent Microfossils using High-throughput Imaging

    NASA Astrophysics Data System (ADS)

    Elder, L. E.; Hull, P. M.; Hsiang, A. Y.; Kahanamoku, S.

    2016-02-01

    The era of Big Data has ushered in the potential to collect population-level information in a manageable time frame. Taxon-free morphological trait analysis, referred to as ecometrics, can be used to examine and compare ecological dynamics between communities with entirely different species compositions. Until recently, population-level studies of morphology were difficult because of the time-intensive task of collecting measurements. To overcome this, we implemented advances in imaging technology and created software to automate measurements. This high-throughput set of methods collects assemblage-scale data, with methods tuned to foraminiferal samples (e.g., light objects on a dark background). Methods include serial focused dark-field microscopy, custom software (AutoMorph) to batch process images, extraction of 2D and 3D shape parameters and frames, and landmark-free geometric morphometric analyses. Informatics pipelines were created to store, catalog, and share images through the Yale Peabody Museum (YPM; peabody.yale.edu). We openly share software and images to enhance future data discovery. In less than a year we have generated over 25 TB of high-resolution semi-3D images for this initial study. Here, we take the first step towards developing ecometric approaches for open-ocean microfossil communities with a calibration study of community shape in recent sediments. We will present an overview of the 'shape' of modern planktonic foraminiferal communities from 25 Atlantic core-top samples (23 sites in the North and Equatorial Atlantic; 2 sites in the South Atlantic). In total, more than 100,000 microfossils and fragments were imaged from these sites' sediment cores, an unprecedented morphometric sample set. Correlates of community shape, including diversity, temperature, and latitude, will be discussed. These methods have also been applied to images of limpets and fish teeth to date, and have the potential to be used on modern taxa to extract meaningful information on community responses to changing climate.

  29. WFIRST Science Operations at STScI

    NASA Astrophysics Data System (ADS)

    Gilbert, Karoline; STScI WFIRST Team

    2018-06-01

    With sensitivity and resolution comparable to the Hubble Space Telescope, and a field of view 100 times larger, the Wide Field Instrument (WFI) on WFIRST will be a powerful survey instrument. STScI will be the Science Operations Center (SOC) for the WFIRST mission, with additional science support provided by the Infrared Processing and Analysis Center (IPAC) and foreign partners. STScI will schedule and archive all WFIRST observations, calibrate and produce pipeline-reduced data products for imaging with the Wide Field Instrument, support the High Latitude Imaging and Supernova Survey Teams, and support the astronomical community in planning WFI imaging observations and analyzing the data. STScI has developed detailed concepts for WFIRST operations, including a data management system integrating data processing and the archive, which will include a novel, cloud-based framework for high-level data processing, providing a common environment accessible to all users (STScI operations, Survey Teams, General Observers, and archival investigators). To aid the astronomical community in examining the capabilities of WFIRST, STScI has built several simulation tools. We describe the functionality of each tool and give examples of its use.

  30. ClearedLeavesDB: an online database of cleared plant leaf images

    PubMed Central

    2014-01-01

    Background: Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description: The Cleared Leaf Image Database (ClearedLeavesDB) is an online, web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions: We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985

  31. ClearedLeavesDB: an online database of cleared plant leaf images.

    PubMed

    Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S

    2014-03-28

    Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB) is an online, web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
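
    Both ClearedLeavesDB records above mention an open API and a downloadable web-services client for uploading processed images. The sketch below shows what such an upload could look like from Python; the endpoint path, form fields, and file name are hypothetical placeholders, not the documented ClearedLeavesDB API.

    ```python
    # Sketch: uploading an image to a leaf-image database over HTTP.
    # NOTE: the /api/upload path and field names are HYPOTHETICAL; consult
    # the actual ClearedLeavesDB client application for real usage.
    import requests

    BASE = "http://clearedleavesdb.org"          # site named in the abstract
    with open("cleared_leaf.png", "rb") as fh:   # hypothetical local file
        resp = requests.post(
            f"{BASE}/api/upload",                # hypothetical endpoint
            files={"image": fh},
            data={"species": "Quercus alba",     # hypothetical metadata
                  "collection": "my-collection"},
            timeout=30,
        )
    print(resp.status_code)
    ```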

  32. Application of advanced signal processing techniques to the rectification and registration of spaceborne imagery. [technology transfer, data transmission]

    NASA Technical Reports Server (NTRS)

    Caron, R. H.; Rifman, S. S.; Simon, K. W.

    1974-01-01

    The development of an ERTS/MSS image processing system responsive to the needs of the user community is discussed. An overview of the TRW ERTS/MSS processor is presented, followed by a more detailed discussion of the image processing functions satisfied by the system. The particular functions chosen for discussion evolved from advanced signal processing techniques rooted in the areas of communication and control. These examples show how classical aerospace technology can be transferred to solve the more contemporary problems confronting the users of spaceborne imagery.

  33. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    PubMed

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community in applying quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This in-memory requirement becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
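
    The core idea, out-of-core processing with bounded memory, can be sketched independently of SIproc. The snippet streams a band-sequential hyperspectral cube from disk in row strips via numpy memory mapping; the file name, layout, and sizes are assumptions, and SIproc itself adds GPU computing and adaptive streaming on top of this basic pattern.

    ```python
    # Sketch: out-of-core band averaging over a hyperspectral cube on disk.
    import numpy as np

    bands, rows, cols = 256, 2048, 2048          # assumed cube dimensions
    cube = np.memmap("cube.bin", dtype=np.float32, mode="r",
                     shape=(bands, rows, cols))  # band-sequential layout

    strip = 16                                   # rows per chunk (tunable)
    mean = np.zeros((rows, cols), dtype=np.float64)
    for r0 in range(0, rows, strip):
        chunk = np.asarray(cube[:, r0:r0 + strip, :])  # only this strip in RAM
        mean[r0:r0 + strip, :] = chunk.mean(axis=0)    # per-pixel band mean
    ```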

  34. A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.

    1993-01-01

    A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.

  35. Research@ARL. Imaging & Image Processing. Volume 3, Issue 1

    DTIC Science & Technology

    2014-01-01

    goal, the focal plane arrays (FPAs) the Army deploys must excel in all areas of performance including thermal sensitivity, image resolution, speed of...are available only in relatively small sizes. Further, the difference in thermal expansion coefficients between a CZT substrate and its silicon (Si...read-out integrated circuitry reduces the reliability of large format FPAs due to repeated thermal cycling. Some in the community believed this

  36. Crowdsourcing Based 3D Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing, and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived using photogrammetric processing software, simply by using the community's images, without visiting the site.

  37. Sustainable Land Imaging User Requirements

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Snyder, G.; Vadnais, C. M.

    2017-12-01

    The US Geological Survey (USGS) Land Remote Sensing Program (LRSP) has collected user requirements from a range of applications to help formulate the Landsat 9 follow-on mission (Landsat 10) through the Requirements, Capabilities and Analysis (RCA) activity. The USGS is working with NASA to develop Landsat 10, which is scheduled to launch in the 2027 timeframe as part of the Sustainable Land Imaging program. User requirements collected through RCA will help inform future Landsat 10 sensor designs and mission characteristics. Current Federal civil community users have provided hundreds of requirements through systematic, in-depth interviews. Academic, State, local, industry, and international Landsat user community input was also incorporated in the process. Emphasis was placed on spatial resolution, temporal revisit, and spectral characteristics, as well as other aspects such as accuracy, continuity, sampling conditions, data access, and format. We will provide an overview of the Landsat 10 user requirements collection process and summary results of user needs from the broad land imaging community.

  38. From Wheatstone to Cameron and beyond: overview in 3-D and 4-D imaging technology

    NASA Astrophysics Data System (ADS)

    Gilbreath, G. Charmaine

    2012-02-01

    This paper reviews three-dimensional (3-D) and four-dimensional (4-D) imaging technology, from Wheatstone through today, with some prognostications for near-future applications. This field is rich in variety, subject specialty, and applications. A major trend, multi-view stereoscopy, is moving the field forward to real-time wide-angle 3-D reconstruction as breakthroughs in parallel processing and multi-processor computers enable very fast processing. Real-time holography meets 4-D imaging reconstruction at the goal of achieving real-time, interactive 3-D imaging. Applications to telesurgery and telemedicine, as well as to the needs of the defense and intelligence communities, are also discussed.

  39. Imaging and the new biology: What's wrong with this picture?

    NASA Astrophysics Data System (ADS)

    Vannier, Michael W.

    2004-05-01

    The Human Genome has been defined, giving us one part of the equation that stems from the central dogma of molecular biology. Despite this awesome scientific achievement, the correspondence between genomics and imaging is weak, since we cannot predict an organism's phenotype from even perfect knowledge of its genetic complement. Biological knowledge comes in several forms, and the genome is perhaps the best known and most completely understood type. Imaging creates another form of biological information, providing the ability to study morphology, growth and development, metabolic processes, and diseases in vitro and in vivo at many levels of scale. The principal challenge in biomedical imaging for the future lies in the need to reconcile the data provided by one or multiple modalities with other forms of biological knowledge, most importantly the genome, proteome, physiome, and other "-omes." To date, the imaging science community has not set a high priority on unifying its results with genomics, proteomics, and physiological functions in most published work. Images remain relatively isolated from other forms of biological data, impairing our ability to conceive and address many fundamental questions in research and clinical practice. This presentation will explain the challenge of biological knowledge integration in basic research and clinical applications from the standpoint of imaging and image processing. The impediments to progress, including the isolation of the imaging community from the mainstream of new and future biological science, will be identified, so that the critical and immediate need for change can be highlighted.

  40. Capillary absorption spectrometer and process for isotopic analysis of small samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, which may include isotopologues of carbon and oxygen obtained from gas and biological samples. The approach further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, which avoids the need for expensive isotopically labeled mixtures and allows study of samples taken from the field without modification. The process also permits in vivo sampling, enabling real-time ambient studies of microbial communities.

  41. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would gain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. Meanwhile, the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of appropriate images, resolutions, and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing, and the application. FAST shall provide the user with a quick overview of processing flows to choose from in order to reach the target. FAST will ask for the available images, application parameters, and desired information, and will process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques, and it provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to apply advanced processing methods to maximize the benefit of multi-sensor image exploitation.
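
    A selection tool of this kind ultimately encodes a mapping from image properties and application needs to a fusion technique. The sketch below shows one hypothetical rule-based mapping; the rules and method names are illustrative placeholders, not FAST's actual knowledge base.

    ```python
    # Sketch: a toy decision table for choosing a fusion approach.
    from dataclasses import dataclass

    @dataclass
    class FusionRequest:
        application: str              # e.g. "land-cover", "urban-mapping"
        spatial_ratio: float          # pan/MS ground-sample-distance ratio
        needs_spectral_fidelity: bool

    def select_fusion(req: FusionRequest) -> str:
        if req.needs_spectral_fidelity:
            # Component substitution tends to distort spectra; prefer MRA.
            return "wavelet-based multiresolution analysis"
        if req.spatial_ratio <= 4:
            return "Gram-Schmidt component substitution"
        return "model-based (e.g. variational) fusion"

    print(select_fusion(FusionRequest("land-cover", 4.0, True)))
    ```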

  42. TheHiveDB image data management and analysis framework.

    PubMed

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-06

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative.

  3. TheHiveDB image data management and analysis framework

    PubMed Central

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-01

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative. PMID:24432000

  4. Novel wavelength diversity technique for high-speed atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Arrasmith, William W.; Sullivan, Sean F.

    2010-04-01

    The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. The development of parallel processing capabilities is finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.

  5. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    PubMed

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-06-01

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL-Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library may span a broad spectrum, such as in image processing to improve visualization, in segmentation for nuclei detection, in decision support systems for second opinion consultations, in statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, in fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards the creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  6. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. [Migrants' female partners: social image and the search for sexual and reproductive health services].

    PubMed

    Ochoa-Marín, Sandra C; Cristancho-Marulanda, Sergio; González-López, José Rafael

    2011-04-01

    Analysing the self-image and social image of migrants' female partners (MFP) and their relationship with the search for sexual and reproductive health services (SRHS) in communities having a high US migratory intensity index. 60 MFP were subjected to in-depth interviews between October 2004 and May 2005, and 19 semi-structured interviews were held with members of their families, 14 representatives from social organisations, 10 health service representatives and 31 men and women residing in the community. MFP self-image and social image regard these women as being "vulnerable", "alone", "lacking a sexual partner" and thus sexually inactive. Consequently, "they must not contract sexually-transmitted diseases (STD), use contraceptives or become pregnant" when their partners are in the USA. The search for SRHS services was found to be related to self-image and social image, and the notion of family or social control predominated in the behaviour expected of these women, which, in turn, was related to whether or not they lived with their families. MFP living with their family or their partner's family were subject to greater "family" control in their search for SRHS services. On the contrary, MFP living alone were subject to greater "social" control over this process. Sexually inactive women's self-image and social image seem to have a bearing on these women's social behaviour and could become an obstacle to the timely search for SRHS services in communities having high migratory intensity.

  8. Open Science CBS Neuroimaging Repository: Sharing ultra-high-field MR images of the brain.

    PubMed

    Tardif, Christine Lucas; Schäfer, Andreas; Trampel, Robert; Villringer, Arno; Turner, Robert; Bazin, Pierre-Louis

    2016-01-01

    Magnetic resonance imaging at ultra high field opens the door to quantitative brain imaging at sub-millimeter isotropic resolutions. However, novel image processing tools to analyze these new rich datasets are lacking. In this article, we introduce the Open Science CBS Neuroimaging Repository: a unique repository of high-resolution and quantitative images acquired at 7 T. The motivation for this project is to increase interest for high-resolution and quantitative imaging and stimulate the development of image processing tools developed specifically for high-field data. Our growing repository currently includes datasets from MP2RAGE and multi-echo FLASH sequences from 28 and 20 healthy subjects respectively. These datasets represent the current state-of-the-art in in-vivo relaxometry at 7 T, and are now fully available to the entire neuroimaging community. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. AstroCV: Astronomy computer vision library

    NASA Astrophysics Data System (ADS)

    González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.

    2018-04-01

    AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis in the automatic detection and classification of galaxies.

  10. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    PubMed Central

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
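
    As a rough illustration of the idea (not the authors' implementation), the sketch below builds the pixel graph described above with networkx, weighting 4-neighbour edges by lifetime similarity and sweeping the resolution parameter of a modularity-based community detection; the function name, sigma, and resolution values are assumptions.

```python
# Illustrative sketch: segment a small FLIM lifetime image by community
# detection on a pixel-similarity graph. Requires networkx >= 2.7 for the
# `resolution` keyword; intended for small images only (the graph is dense).
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def segment_flim(flt_image, sigma=0.5, resolution=1.0):
    """flt_image: 2-D array of per-pixel fluorescence lifetimes (ns)."""
    h, w = flt_image.shape
    g = nx.Graph()
    for y in range(h):
        for x in range(w):
            g.add_node((y, x))
            # 4-connected neighbours; edge weight decays with lifetime difference
            for dy, dx in ((0, 1), (1, 0)):
                yn, xn = y + dy, x + dx
                if yn < h and xn < w:
                    d = flt_image[y, x] - flt_image[yn, xn]
                    g.add_edge((y, x), (yn, xn),
                               weight=np.exp(-d**2 / (2 * sigma**2)))
    # Higher resolution -> more, smaller communities (finer segments)
    communities = greedy_modularity_communities(g, weight="weight",
                                                resolution=resolution)
    labels = np.zeros((h, w), dtype=int)
    for k, comm in enumerate(communities):
        for (y, x) in comm:
            labels[y, x] = k
    return labels
```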

  11. Pc-Based Floating Point Imaging Workstation

    NASA Astrophysics Data System (ADS)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and to analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. The challenge of meeting these demands forces the designer to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought-of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful and flexible floating-point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.

  12. Generative Adversarial Networks: An Overview

    NASA Astrophysics Data System (ADS)

    Creswell, Antonia; White, Tom; Dumoulin, Vincent; Arulkumaran, Kai; Sengupta, Biswa; Bharath, Anil A.

    2018-01-01

    Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
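
    For readers wanting the competitive process in concrete form, here is a minimal sketch of one adversarial update in PyTorch; the toy architectures, dimensions and learning rates are illustrative assumptions, not drawn from the review.

```python
# Minimal adversarial training step: discriminator D learns to separate real
# from generated samples; generator G learns to fool D via D's gradients.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    b = real_batch.size(0)
    # Discriminator: push real toward 1, generated toward 0
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: backpropagate the discriminator's signal to fool it
    loss_g = bce(D(G(torch.randn(b, latent_dim))), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```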

  13. Process evaluation of the Intervention with Microfinance for AIDS and Gender Equity (IMAGE) in rural South Africa.

    PubMed

    Hargreaves, James; Hatcher, Abigail; Strange, Vicki; Phetla, Godfrey; Busza, Joanna; Kim, Julia; Watts, Charlotte; Morison, Linda; Porter, John; Pronyk, Paul; Bonell, Christopher

    2010-02-01

    The Intervention with Microfinance for AIDS and Gender Equity (IMAGE) combines microfinance, gender/HIV training and community mobilization (CM) in South Africa. A trial found reduced intimate partner violence among clients but less evidence for impact on sexual behaviour among clients' households or communities. This process evaluation examined how feasible IMAGE was to deliver and how accessible and acceptable it was to intended beneficiaries during a trial and subsequent scale-up. Data came from attendance registers, financial records, observations, structured questionnaires (378) and focus group discussions and interviews (128) with clients and staff. Gender/HIV training and CM were managed initially by an academic unit ('linked' model) and later by the microfinance institution (MFI) ('parallel' model). Microfinance and gender/HIV training were feasible to deliver and accessible and acceptable to most clients. Though participation in CM was high for some clients, others experienced barriers to collective action, a finding which may help explain lack of intervention effects among household/community members. Delivery was feasible in the short term but both models were considered unsustainable in the longer term. A linked model involving a MFI and a non-academic partner agency may be more sustainable and is being tried. Feasible models for delivering microfinance and health promotion require further investigation.

  14. Junocam: Juno's Outreach Camera

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  15. Electronic Photography at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack; Judge, Nancianne

    1995-01-01

    An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.

  16. The Auroral Planetary Imaging and Spectroscopy (APIS) service

    NASA Astrophysics Data System (ADS)

    Lamy, L.; Prangé, R.; Henry, F.; Le Sidaner, P.

    2015-06-01

    The Auroral Planetary Imaging and Spectroscopy (APIS) service, accessible online, provides open and interactive access to processed auroral observations of the outer planets and their satellites. Such observations are of interest for a wide community at the interface between planetology, magnetospheric and heliospheric physics. APIS consists of (i) a high-level database, built from planetary auroral observations acquired by the Hubble Space Telescope (HST) since 1997 with its most used far-ultraviolet spectro-imagers, (ii) a dedicated search interface aimed at browsing this database efficiently through relevant conditional search criteria and (iii) the ability to work with the data interactively online through plotting tools developed by the Virtual Observatory (VO) community, such as Aladin and Specview. This service is VO compliant and can therefore also be queried by external search tools of the VO community. The diversity of the available data and the capability to sort them by relevant physical criteria shall in particular facilitate statistical studies, on long-term scales and/or through multi-instrumental multi-spectral combined analyses.

  17. Community College Image--By Hollywood

    ERIC Educational Resources Information Center

    Tucciarone, Krista M.

    2007-01-01

    This qualitative study analyzes how the most recent community college film, Evolution (2001), depicts and portrays the image of a community college as interpreted by attending community college students. Previous community college research suggests that college choice, enrollment, and funding may be affected by perceived image. Image is greatly…

  18. A neotropical Miocene pollen database employing image-based search and semantic modeling.

    PubMed

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-08-01

    Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
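
    A hedged sketch of the content-based half of such a system: compute simple visual descriptors (an intensity histogram plus grey-level co-occurrence texture statistics) and index them for nearest-neighbour retrieval. The descriptor choice and library calls are illustrative assumptions, not the advanced database-indexing structures the authors built.

```python
# Content-based retrieval sketch: small texture/intensity feature vectors,
# indexed for similarity search with scikit-learn.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import NearestNeighbors

def visual_features(gray_img):
    """gray_img: 2-D uint8 array. Returns a short texture/intensity vector."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0],
                        levels=256, normed=True)
    hist, _ = np.histogram(gray_img, bins=16, range=(0, 256), density=True)
    return np.concatenate([hist,
                           graycoprops(glcm, "contrast").ravel(),
                           graycoprops(glcm, "homogeneity").ravel()])

# Build the index once, then query by visual content, e.g.:
# feats = np.stack([visual_features(im) for im in pollen_images])
# index = NearestNeighbors(n_neighbors=5).fit(feats)
# dist, idx = index.kneighbors(visual_features(query_image)[None, :])
```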

  19. System for Contributing and Discovering Derived Mission and Science Data

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Powell, Mark W.; Shams, Khawaja S.; Mickelson, Megan C.; Ohata, Darrick M.; Kurien, James A.; Abramyan, Luch

    2013-01-01

    A system was developed to provide a new mechanism for members of the mission community to create and contribute new science data to the rest of the community. Mission tools have allowed members of the mission community to share first-order data (data that is created by the mission's process in command and control of the spacecraft, or data that is captured by the craft itself, like images, science results, etc.). However, second- and higher-order data (data that is created after the fact by scientists and other members of the mission) was previously not widely disseminated, nor did it make its way into the mission planning process.

  20. Coherent diffractive imaging methods for semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Helfenstein, Patrick; Mochi, Iacopo; Rajeev, Rajendran; Fernandez, Sara; Ekinci, Yasin

    2017-12-01

    The paradigm shift of the semiconductor industry moving from deep ultraviolet to extreme ultraviolet lithography (EUVL) brought about new challenges in the fabrication of illumination and projection optics, which constitute one of the core sources of cost of ownership for many of the metrology tools needed in the lithography process. For this reason, lensless imaging techniques based on coherent diffractive imaging started to raise interest in the EUVL community. This paper presents an overview of currently on-going research endeavors that use a number of methods based on lensless imaging with coherent light.

  1. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being taken in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, visually examining the images and detecting and classifying the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds the 90% mark. In the context of this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for the automatic detection of different sea bird species. Large image dimensions, resulting from the use of medium- and large-format digital cameras in aerial surveys, inhibit the applicability of image processing methods based on global operations. In order to handle those image sizes efficiently and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined use of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for the determination of feature vectors for subsequent elimination of false candidates and for classification tasks.
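
    The sketch below illustrates the described two-stage combination under stated assumptions: a cheap local blob detector (Laplacian-of-Gaussian) proposes candidates on the full frame, and a globally operating segmentation (Chan-Vese here, as a stand-in) runs only on small sub-images around each candidate; all parameters are placeholders.

```python
# Two-stage candidate detection sketch: local detector on the full frame,
# global segmentation only on extracted sub-images.
import numpy as np
from skimage.feature import blob_log
from skimage.segmentation import chan_vese

def detect_candidates(image, crop=32):
    """image: 2-D float array (large aerial frame, normalised to [0, 1])."""
    # Cheap, local operation over the whole (possibly huge) frame
    blobs = blob_log(image, min_sigma=2, max_sigma=8, threshold=0.1)
    segments = []
    for y, x, _sigma in blobs:
        y, x = int(y), int(x)
        sub = image[max(0, y - crop):y + crop, max(0, x - crop):x + crop]
        # Globally operating active-contour segmentation is affordable here
        mask = chan_vese(sub, mu=0.25)
        segments.append(((y, x), mask))
    return segments  # exact masks feed feature vectors for classification
```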

  2. Theatre of the oppressed and environmental justice communities: a transformational therapy for the body politic.

    PubMed

    Sullivan, John; Petronella, Sharon; Brooks, Edward; Murillo, Maria; Primeau, Loree; Ward, Jonathan

    2008-03-01

    Community Environmental Forum Theatre at UTMB-NIEHS Center in Environmental Toxicology uses Augusto Boal's Theatre of the Oppressed (TO) to promote involvement of citizens, scientists, and health professionals in deconstructing toxic exposures, risk factors, and cumulative stressors that impact the well-being of communities. The TO process encourages collective empowerment of communities by disseminating information and elaborating support networks. TO also elicits transformation and growth on a personal level via a dramaturgical system that restores spontaneity through image-making and improvisation. An NIEHS Environmental Justice Project, Communities Organized against Asthma & Lead, illustrates this interplay of personal and collective change in Houston, Texas.

  3. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.

  4. Youpi: YOUr processing PIpeline

    NASA Astrophysics Data System (ADS)

    Monnerville, Mathias; Sémah, Gregory

    2012-03-01

    Youpi is a portable, easy to use web application providing high level functionalities to perform data reduction on scientific FITS images. Built on top of various open source reduction tools released to the community by TERAPIX (http://terapix.iap.fr), Youpi can help organize data, manage processing jobs on a computer cluster in real time (using Condor) and facilitate teamwork by allowing fine-grain sharing of results and data. Youpi is modular and comes with plugins which perform, from within a browser, various processing tasks such as evaluating the quality of incoming images (using the QualityFITS software package), computing astrometric and photometric solutions (using SCAMP), resampling and co-adding FITS images (using SWarp) and extracting sources and building source catalogues from astronomical images (using SExtractor). Youpi is useful for small to medium-sized data reduction projects; it is free and is published under the GNU General Public License.

  5. Innovation contests to promote sexual health in China: a qualitative evaluation.

    PubMed

    Zhang, Wei; Schaffer, David; Tso, Lai Sze; Tang, Songyuan; Tang, Weiming; Huang, Shujie; Yang, Bin; Tucker, Joseph D

    2017-01-14

    Innovation contests call on non-experts to help solve problems. While these contests have been used extensively in the private sector to increase engagement between organizations and clients, there is little data on the role of innovation contests to promote health campaigns. We implemented an innovation contest in China to increase sexual health awareness among youth and evaluated community engagement in the contest. The sexual health image contest consisted of an open call for sexual health images, contest promotion activities, judging of entries, and celebrating contributions. Contest promotion activities included in-person and social media feedback, classroom didactics, and community-driven activities. We conducted 19 semi-structured interviews with a purposive sample to ensure a range of participant scores, experts and non-expert participants, submitters and non-submitters. Transcripts of each interview were coded with Atlas.ti and evaluated by three reviewers. We identified stages of community engagement in the contest which contributed to public health impact. Community engagement progressed across a continuum from passive, moderate, active, and finally strong engagement. Engagement was a dynamic process that appeared to have little relationship with formally submitting an image to the contest. Among non-expert participants, contest engagement increased knowledge, healthy attitudes, and empowered participants to share ideas about safe sex with others outside of the contest. Among experts who helped organize the contest, the process of implementing the contest fostered multi-sectoral collaboration and re-oriented public health leadership towards more patient-centered public health campaigns. The results of this study suggest that innovation contests may be a useful tool for public health promotion by enhancing community engagement and re-orienting health campaigns to make them more patient-centered.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halsted, Michelle; Wilmoth, Jared L.; Briggs, Paige A.

    Microbial communities are incredibly complex systems that dramatically and ubiquitously influence our lives. They help to shape our climate and environment, impact agriculture, drive business, and have a tremendous bearing on healthcare and physical security. Spatial confinement, as well as local variations in physical and chemical properties, affects development and interactions within microbial communities that occupy critical niches in the environment. Recent work has demonstrated the use of silicon-based microwell arrays, combined with parylene lift-off techniques, to perform both deterministic and stochastic assembly of microbial communities en masse, enabling the high-throughput screening of microbial communities for their response to growth in confined environments under different conditions. The implementation of a transparent microwell array platform can expand and improve the imaging modalities that can be used to characterize these assembled communities. In this paper, the fabrication and characterization of a next generation transparent microwell array is described. The transparent arrays, comprised of SU-8 patterned on a glass coverslip, retain the ability to use parylene lift-off by integrating a low temperature atomic layer deposition of silicon dioxide into the fabrication process. This silicon dioxide layer prevents adhesion of the parylene material to the patterned SU-8, facilitating dry lift-off, and maintaining the ability to easily assemble microbial communities within the microwells. These transparent microwell arrays can screen numerous community compositions using continuous, high resolution imaging. Finally, the utility of the design was successfully demonstrated through the stochastic seeding and imaging of green fluorescent protein expressing Escherichia coli using both fluorescence and brightfield microscopies.

  7. EPS in Environmental Microbial Biofilms as Examined by Advanced Imaging Techniques

    NASA Astrophysics Data System (ADS)

    Neu, T. R.; Lawrence, J. R.

    2006-12-01

    Biofilm communities are highly structured associations of cellular and polymeric components which are involved in biogenic and geogenic environmental processes. Furthermore, biofilms are also important in medical (infection), industrial (biofouling) and technological (biofilm engineering) processes. The interfacial microbial communities in a specific habitat are highly dynamic and change according to the environmental parameters, affecting not only the cellular but also the polymeric constituents of the system. Through their EPS, biofilms interact with dissolved, colloidal and particulate compounds from the bulk water phase. For a long time the focus in biofilm research was on the cellular constituents of biofilms, and the polymer matrix has been rather neglected. The polymer matrix is produced not only by different bacteria and archaea but also by eukaryotic micro-organisms such as algae and fungi. The mostly unidentified mixture of EPS compounds is responsible for many biofilm properties and is involved in biofilm functionality. The chemistry of the EPS matrix represents a mixture of polymers including polysaccharides, proteins, nucleic acids, neutral polymers, charged polymers, amphiphilic polymers and refractory microbial polymers. The analysis of the EPS may be done destructively by means of extraction and subsequent chemical analysis, or in situ by means of specific probes in combination with advanced imaging. In the last 15 years, laser scanning microscopy (LSM) has become established as an indispensable technique for studying microbial communities. LSM with 1-photon and 2-photon excitation in combination with fluorescence techniques allows 3-dimensional investigation of fully hydrated, living biofilm systems. This approach is able to reveal data on biofilm structural features as well as biofilm processes and interactions. The fluorescent probes available allow the quantitative assessment of cellular as well as polymer distribution. For this purpose, lectin-binding analysis has been suggested as a suitable approach to image glycoconjugates within the polymer matrix of biofilm communities. More recently, synchrotron radiation has been increasingly recognized as a powerful tool for studying biological samples. Hard X-ray excitation can be used to map elemental composition, whereas IR imaging allows examination of biological macromolecules. A further technique, called soft X-ray scanning transmission microscopy (STXM), combines the advantages of both techniques and may be employed to detect elements as well as biomolecules. Using the appropriate spectra, near edge X-ray absorption fine structure (NEXAFS) microscopy allows quantitative chemical mapping at 50 nm resolution. In this presentation the applicability of LSM and STXM will be demonstrated using several examples of different environmental biofilm systems. The techniques in combination provide a new view of complex microbial communities and their interaction with the environment. These advanced imaging techniques offer the possibility to study the spatial structure of cellular and polymeric compounds in biofilms as well as biofilm microhabitats, biofilm functionality and biofilm processes.

  8. End-to-end performance analysis using engineering confidence models and a ground processor prototype

    NASA Astrophysics Data System (ADS)

    Kruse, Klaus-Werner; Sauer, Maximilian; Jäger, Thomas; Herzog, Alexandra; Schmitt, Michael; Huchler, Markus; Wallace, Kotska; Eisinger, Michael; Heliere, Arnaud; Lefebvre, Alain; Maher, Mat; Chang, Mark; Phillips, Tracy; Knight, Steve; de Goeij, Bryan T. G.; van der Knaap, Frits; Van't Hof, Adriaan

    2015-10-01

    The European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) are co-operating to develop the EarthCARE satellite mission with the fundamental objective of improving the understanding of the processes involving clouds, aerosols and radiation in the Earth's atmosphere. The EarthCARE Multispectral Imager (MSI) is relatively compact for a spaceborne imager. As a consequence, the immediate point-spread function (PSF) of the instrument will be mainly determined by the diffraction caused by the relatively small optical aperture. In order to still achieve a high-contrast image, de-convolution processing is applied to remove the impact of diffraction on the PSF. A Lucy-Richardson algorithm has been chosen for this purpose. This paper will describe the system setup and the necessary data pre-processing and post-processing steps applied in order to compare the end-to-end image quality with the L1b performance required by the science community.
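
    Since the Lucy-Richardson algorithm is named explicitly, a minimal sketch of diffraction-compensating de-convolution is easy to give with scikit-image; the Gaussian stand-in for the diffraction PSF, the synthetic scene, and the iteration count are assumptions, not EarthCARE MSI values.

```python
# Lucy-Richardson de-convolution sketch: blur a synthetic scene with a
# PSF, then restore it with skimage's richardson_lucy.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, fwhm=4.0):
    """Gaussian stand-in for a diffraction-dominated PSF (assumption)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    sigma = fwhm / 2.355
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

rng = np.random.default_rng(0)
scene = rng.random((128, 128))                   # synthetic radiance field
psf = gaussian_psf()
blurred = fftconvolve(scene, psf, mode="same")   # diffraction forward model
restored = richardson_lucy(blurred, psf, 30)     # 30 LR iterations
```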

  9. Recent Advances in Techniques for Hyperspectral Image Processing

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.
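
    As one concrete baseline for the high-dimensional problem described, here is a hedged sketch: PCA reduces the spectral dimension, then a per-pixel SVM classifies. The cube shape, component count and labels are placeholders, and this is not one of the paper's own techniques.

```python
# Spectral-reduction + per-pixel classification baseline for a
# hyperspectral cube of shape (rows, cols, bands).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def classify_cube(cube, train_mask, train_labels, n_components=10):
    """train_mask: boolean (rows, cols); train_labels: labels of masked pixels."""
    r, c, b = cube.shape
    pixels = cube.reshape(-1, b)                       # one row per pixel
    reduced = PCA(n_components=n_components).fit_transform(pixels)
    clf = SVC(kernel="rbf").fit(reduced[train_mask.ravel()], train_labels)
    return clf.predict(reduced).reshape(r, c)          # class map
```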

  10. Youpi: A Web-based Astronomical Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Monnerville, M.; Sémah, G.

    2010-12-01

    Youpi stands for “YOUpi is your processing PIpeline”. It is a portable, easy to use web application providing high level functionalities to perform data reduction on scientific FITS images. It is built on top of open source processing tools that are released to the community by Terapix, in order to organize your data on a computer cluster, to manage your processing jobs in real time and to facilitate teamwork by allowing fine-grain sharing of results and data. On the server side, Youpi is written in the Python programming language and uses the Django web framework. On the client side, Ajax techniques are used along with the Prototype and script.aculo.us JavaScript libraries.
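
    For orientation, a minimal sketch of the kind of reduction chain Youpi's plugins wrap, assuming the TERAPIX tools are installed locally under their usual binary names (sex, scamp, swarp) with default configurations; Youpi itself schedules these as managed cluster jobs via Condor rather than direct calls, so this is an illustration of the steps, not of Youpi's code.

```python
# Reduction-chain sketch: source extraction, astrometric calibration,
# then resampling/co-addition, driven via subprocess.
import subprocess

def reduce_images(fits_files):
    for f in fits_files:
        subprocess.run(["sex", f], check=True)    # SExtractor: build catalogue
        # SCAMP consumes the extracted catalogue (filename is an assumption)
        subprocess.run(["scamp", f.replace(".fits", ".cat")], check=True)
    # SWarp resamples and co-adds the calibrated frames
    subprocess.run(["swarp", *fits_files], check=True)
```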

  11. Sentinel-2 for rapid operational landslide inventory mapping

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Marc, Odin; Malet, Jean-Philippe; Michea, David

    2017-04-01

    Landslide inventory mapping after major triggering events such as heavy rainfalls or earthquakes is crucial for disaster response, the assessment of hazards, and the quantification of sediment budgets and empirical scaling laws. Numerous studies have already demonstrated the utility of very-high-resolution satellite and aerial images for the elaboration of inventories based on semi-automatic methods or visual image interpretation. Nevertheless, such semi-automatic methods are rarely used in an operational context after major triggering events; this is partly due to access limitations on the required input datasets (i.e. VHR satellite images) and to the absence of dedicated services (i.e. processing chains) available to the landslide community. Several on-going initiatives help to overcome these limitations. First, from a data perspective, the launch of the Sentinel-2 mission offers opportunities for the design of an operational service that can be deployed for landslide inventory mapping at any time and anywhere on the globe. Second, from an implementation perspective, the Geohazards Exploitation Platform (GEP) of the European Space Agency (ESA) allows the integration and diffusion of on-line processing algorithms in a high-performance computing environment. Third, from a community perspective, the recently launched Landslide Pilot of the Committee on Earth Observation Satellites (CEOS) has targeted the take-off of such a service as a main objective for the landslide community. Within this context, this study targets the development of a largely automatic, supervised image processing chain for landslide inventory mapping from bi-temporal (before and after a given event) Sentinel-2 optical images. The processing chain combines change detection methods, image segmentation, higher-level image features (e.g. texture, shape) and topographic variables. Based on a few representative examples provided by a human operator, a machine learning model is trained and subsequently used to distinguish newly triggered landslides from other landscape elements. The final map product is provided along with an uncertainty map that allows identifying areas which might require further consideration. The processing chain is tested for two recent and contrasting triggering events in New Zealand and Taiwan. A Mw 7.8 earthquake in New Zealand in November 2016 triggered tens of thousands of landslides in a complex environment, with important textural variations with elevation due to vegetation change and snow cover. In contrast, a large but unexceptional typhoon in July 2016 in Taiwan triggered a moderate number of relatively small landslides in a lushly vegetated, more homogeneous terrain. Based on the obtained results we discuss the potential and limitations of Sentinel-2 bi-temporal images and time-series for operational landslide inventory mapping. This work is part of the General Studies Program (GSP) ALCANTARA of ESA.
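
    A hedged sketch of the supervised bi-temporal core: stack simple change features (here only an NDVI difference plus slope, as stand-ins for the texture/shape features mentioned above) and train a random forest on a few operator-labelled pixels. Function names, band handling and the binary labels are assumptions.

```python
# Bi-temporal change-detection + supervised classification sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-9)

def map_landslides(pre, post, slope, labelled_idx, labels):
    """pre/post: dicts of 2-D arrays with 'red' and 'nir' bands; slope: 2-D array.
    labelled_idx: indices of operator-labelled pixels; labels in {0, 1}."""
    d_ndvi = ndvi(post["red"], post["nir"]) - ndvi(pre["red"], pre["nir"])
    features = np.column_stack([d_ndvi.ravel(), slope.ravel()])
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(features[labelled_idx], labels)
    # Per-pixel landslide probability; thresholding and an uncertainty
    # map (e.g. from the probabilities) come downstream.
    return clf.predict_proba(features)[:, 1].reshape(d_ndvi.shape)
```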

  12. Analysis of Non Local Image Denoising Methods

    NASA Astrophysics Data System (ADS)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications to their proposal. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
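
    For the record, the Non Local Means weight between pixels i and j is w(i, j) ∝ exp(-||P(i) - P(j)||² / h²), where P(·) is a patch around the pixel and h a filtering parameter. A library-level sketch follows; the noise level, patch sizes and h are illustrative assumptions.

```python
# Non Local Means denoising with scikit-image on a synthetically noised image.
import numpy as np
from skimage import data
from skimage.restoration import denoise_nl_means, estimate_sigma

noisy = data.camera() / 255.0 + 0.08 * np.random.randn(512, 512)
sigma = estimate_sigma(noisy)                  # rough noise estimate
clean = denoise_nl_means(noisy, patch_size=7,  # patch P(.) is 7x7
                         patch_distance=11,    # search window radius
                         h=1.15 * sigma)       # filtering parameter
```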

  13. IMAGESEER - IMAGEs for Education and Research

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara

    2012-01-01

    IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.

  14. Development of transparent microwell arrays for optical monitoring and dissection of microbial communities

    DOE PAGES

    Halsted, Michelle; Wilmoth, Jared L.; Briggs, Paige A.; ...

    2016-09-29

    Microbial communities are incredibly complex systems that dramatically and ubiquitously influence our lives. They help to shape our climate and environment, impact agriculture, drive business, and have a tremendous bearing on healthcare and physical security. Spatial confinement, as well as local variations in physical and chemical properties, affects development and interactions within microbial communities that occupy critical niches in the environment. Recent work has demonstrated the use of silicon-based microwell arrays, combined with parylene lift-off techniques, to perform both deterministic and stochastic assembly of microbial communities en masse, enabling the high-throughput screening of microbial communities for their response to growth in confined environments under different conditions. The implementation of a transparent microwell array platform can expand and improve the imaging modalities that can be used to characterize these assembled communities. In this paper, the fabrication and characterization of a next generation transparent microwell array is described. The transparent arrays, comprised of SU-8 patterned on a glass coverslip, retain the ability to use parylene lift-off by integrating a low temperature atomic layer deposition of silicon dioxide into the fabrication process. This silicon dioxide layer prevents adhesion of the parylene material to the patterned SU-8, facilitating dry lift-off, and maintaining the ability to easily assemble microbial communities within the microwells. These transparent microwell arrays can screen numerous community compositions using continuous, high resolution imaging. Finally, the utility of the design was successfully demonstrated through the stochastic seeding and imaging of green fluorescent protein expressing Escherichia coli using both fluorescence and brightfield microscopies.

  15. GREENPLEX -- A SUSTAINABLE URBAN FORM FOR THE 21ST CENTURY

    EPA Science Inventory

    Outputs include images of architecture, space usage, social design, elevators, skybridges, ETFE envelope, structures, construction process, HVAC system, and water system.  Outputs include performance metrics for the University Community Greenplex and traditional univer...

  16. Statistical model for speckle pattern optimization.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren

    2017-11-27

    Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
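
    The "filtered Poisson process" view of speckle generation can be made concrete: draw a Poisson-distributed number of speckle centres, place them uniformly, and filter each with a Gaussian speckle profile. The sketch below is illustrative only; the density and radius are assumptions, not the optimal values derived in the paper.

```python
# Speckle pattern as a filtered Poisson process: Poisson point count,
# uniform positions, Gaussian kernel per speckle.
import numpy as np

def make_speckle_pattern(size=256, density=0.02, radius=3.0, rng=None):
    rng = rng or np.random.default_rng()
    n = rng.poisson(density * size * size)          # Poisson number of speckles
    xs = rng.uniform(0, size, n)
    ys = rng.uniform(0, size, n)
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for x, y in zip(xs, ys):                        # Gaussian "filter" per point
        img += np.exp(-((xx - x)**2 + (yy - y)**2) / (2 * radius**2))
    return img / img.max()                          # normalised intensity
```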

  17. IEEE International Symposium on Biomedical Imaging.

    PubMed

    2017-01-01

    The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative of the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials and a scientific program composed of plenary talks, invited special sessions, challenges, and oral and poster presentations of peer-reviewed papers. High-quality papers are requested, containing original contributions to the topics of interest including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will appear in the symposium proceedings published by IEEE and be included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and to offer additional presentation opportunities, ISBI 2018 will again feature a second track of posters selected from 1-page abstract submissions without subsequent archival publication.

  18. 3D-resolved fluorescence and phosphorescence lifetime imaging using temporal focusing wide-field two-photon excitation

    PubMed Central

    Choi, Heejin; Tzeranis, Dimitrios S.; Cha, Jae Won; Clémenceau, Philippe; de Jong, Sander J. G.; van Geest, Lambertus K.; Moon, Joong Ho; Yannas, Ioannis V.; So, Peter T. C.

    2012-01-01

    Fluorescence and phosphorescence lifetime imaging are powerful techniques for studying intracellular protein interactions and for diagnosing tissue pathophysiology. While lifetime-resolved microscopy has long been in the repertoire of the biophotonics community, current implementations fall short in terms of simultaneously providing 3D resolution, high throughput, and good tissue penetration. This report describes a new highly efficient lifetime-resolved imaging method that combines temporal focusing wide-field multiphoton excitation and simultaneous acquisition of lifetime information in frequency domain using a nanosecond gated imager from a 3D-resolved plane. This approach is scalable allowing fast volumetric imaging limited only by the available laser peak power. The accuracy and performance of the proposed method is demonstrated in several imaging studies important for understanding peripheral nerve regeneration processes. Most importantly, the parallelism of this approach may enhance the imaging speed of long lifetime processes such as phosphorescence by several orders of magnitude. PMID:23187477
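
    For context, frequency-domain lifetime estimation rests on two standard relations: at angular modulation frequency ω = 2πf, the emission phase shift φ gives τ_φ = tan(φ)/ω, and the modulation depth m gives τ_m = √(1/m² − 1)/ω. A worked sketch (values illustrative, not from the paper):

```python
# Standard frequency-domain lifetime relations, evaluated numerically.
import numpy as np

def lifetimes_from_frequency_domain(phi, m, f):
    """phi in radians, m in (0, 1), f in Hz; returns (tau_phase, tau_mod) in s."""
    omega = 2 * np.pi * f
    tau_phase = np.tan(phi) / omega
    tau_mod = np.sqrt(1.0 / m**2 - 1.0) / omega
    return tau_phase, tau_mod

# e.g. phi = 0.9 rad and m = 0.5 at f = 80 MHz (a typical laser repetition
# rate) give lifetimes of a few nanoseconds:
print(lifetimes_from_frequency_domain(0.9, 0.5, 80e6))
```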

  19. The imaging node for the Planetary Data System

    USGS Publications Warehouse

    Eliason, E.M.; LaVoie, S.K.; Soderblom, L.A.

    1996-01-01

    The Planetary Data System Imaging Node maintains and distributes the archives of planetary image data acquired from NASA's flight projects with the primary goal of enabling the science community to perform image processing and analysis on the data. The Node provides direct and easy access to the digital image archives through wide distribution of the data on CD-ROM media and on-line remote-access tools by way of Internet services. The Node provides digital image processing tools and the expertise and guidance necessary to understand the image collections. The data collections, now approaching one terabyte in volume, provide a foundation for remote sensing studies for virtually all the planetary systems in our solar system (except for Pluto). The Node is responsible for restoring data sets from past missions in danger of being lost. The Node works with active flight projects to assist in the creation of their archive products and to ensure that their products and data catalogs become an integral part of the Node's data collections.

  20. Detecting the changes in rural communities in Taiwan by applying multiphase segmentation on FORMOSA-2 satellite imagery

    NASA Astrophysics Data System (ADS)

    Huang, Yishuo

    2015-09-01

    Agricultural activities mainly occur in rural areas; recently, ecological conservation and biological diversity have been emphasized in rural communities to promote their sustainable development, especially in Taiwan. Since 2005, many rural communities in Taiwan have therefore compiled their own development strategies in order to create unique local characteristics that attract people to visit and stay. By implementing these strategies, young people can stay in their own rural communities and the communities are rejuvenated. However, some rural communities introduce artificial construction such that the ecological and biological environments are significantly degraded. These strategies need to be monitored efficiently, because up to 67 rural communities have proposed rejuvenation projects, and in 2015 up to 440 rural communities were estimated to be involved in rural community rejuvenation. How to monitor the changes occurring in the participating communities, such that ecological conservation and ecological diversity are maintained, is thus an important issue in rural community management. Remote sensing provides an efficient and rapid way to address this issue. Segmentation plays a fundamental role in human perception; in this respect, segmentation can be viewed as the process of transforming the collection of pixels of an image into a group of regions or objects with meaning. This paper proposes an algorithm based on the multiphase approach to segment the normalized difference vegetation index (NDVI) of the rural communities into several sub-regions, such that the NDVI distribution in each sub-region is homogeneous. Regions whose NDVI values are close are merged into the same class. In doing so, a complex NDVI map can be simplified into two groups: high and low NDVI values. The class with low NDVI values corresponds to regions containing roads, buildings, and other man-made construction works, and the class with high NDVI values indicates regions containing vegetation in good health. In order to verify the processed results, the regional boundaries were extracted and laid over the given images to check whether they fell on buildings, roads, or other artificial constructions. In addition to the proposed approach, another approach called statistical region merging was employed, grouping sets of pixels with homogeneous properties that are iteratively grown by combining smaller regions or pixels; in doing so, a segmented NDVI map can also be generated. By comparing the areas of the merged classes in different years, the changes occurring in the rural communities of Taiwan can be detected. Satellite imagery from FORMOSA-2, with 2-m ground resolution, is employed to evaluate the performance of the proposed approach. The imagery of two rural communities (the Jhumen and Taomi communities) is chosen to evaluate environmental changes between 2005 and 2010. The 2005-2010 change maps show that the area of dense, healthy vegetation increased by 19.62 ha in the Jhumen community and, conversely, decreased significantly, by 236.59 ha, in the Taomi community. Furthermore, the change maps created by the statistical region merging method give processed results similar to those of the multiphase segmentation.
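
    A hedged sketch of the two-class NDVI simplification described above, with Otsu's threshold standing in for the authors' multiphase segmentation; the band arrays, threshold choice and area bookkeeping are assumptions for illustration.

```python
# NDVI two-class simplification and between-year area change, with Otsu's
# threshold as a simple stand-in for multiphase segmentation.
import numpy as np
from skimage.filters import threshold_otsu

def ndvi_map(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def vegetation_area_change(nir1, red1, nir2, red2, pixel_area_m2=4.0):
    """FORMOSA-2 has 2 m ground resolution, so one pixel covers ~4 m^2."""
    masks = []
    for nir, red in ((nir1, red1), (nir2, red2)):
        v = ndvi_map(nir, red)
        masks.append(v > threshold_otsu(v))  # high-NDVI (healthy vegetation) class
    delta_px = int(masks[1].sum()) - int(masks[0].sum())
    return delta_px * pixel_area_m2 / 10_000  # hectares gained (+) or lost (-)
```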

  1. JunoCam: A Public Endeavor

    NASA Astrophysics Data System (ADS)

    Hansen, Candice; Bolton, S.; Caplinger, M.; Dyches, P.; Jensen, E.; Levin, S.; Ravine, M.

    2012-10-01

    The camera on the Juno spacecraft is part of the payload specifically for public outreach. Juno's JunoCam camera team will rely on public participation to accomplish our goals. Our theme is “science in a fishbowl”: execution of camera operations includes several amateur communities playing essential roles, with the public helping to make decisions. JunoCam is a push-frame imager with 4 filters, built by Malin Space Science Systems (MSSS). It uses the Juno spacecraft's rotation to sweep its field of view across the planet. Its wide field of view (58 deg) is optimized to take advantage of Juno's polar orbit, yielding images of the poles with 50 km spatial scale. At perijove of Juno's elliptical orbit, images will have 3 km spatial scale. Jupiter is a dynamic planet, so timely images of its cloudtops from amateur astronomers will be used to simulate what may be in the camera field of view at a given time. We are developing a website to organize contributions from amateur astronomers, together with tools to predict where storms will be. Students will lead blog discussions (or the 2016 equivalent) on the merits of imaging any given target, and the entire public is invited to weigh in on both the merits and the actual decision of which images to acquire. Images will be available within days for the public to process. The JunoCam team is relying on the amateur image processing community for color products, maps, and movies. When JunoCam acquires images of the Earth in October 2013, we will use the opportunity to gain experience operating the instrument with public involvement. Although we will have a professional ops team at MSSS, the tiny size of the team overall means that public participation is not just an extra: it is essential to our success.

  2. Capillary absorption spectrometer and process for isotopic analysis of small samples

    DOEpatents

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2016-03-29

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, including isotopologues of carbon and oxygen obtained from gas and biological samples. The approach further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, avoiding the need for expensive isotopically labeled mixtures and allowing samples taken from the field to be studied without modification. The method also permits sampling in vivo, enabling real-time ambient studies of microbial communities.

  3. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community with a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) to the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait-semantic annotation and for image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
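
    At its simplest, the visual-content search described above reduces to comparing fixed-length descriptors; a hypothetical sketch using plain intensity histograms (the study's trait-semantic models and indexing structures are far richer):

    ```python
    import numpy as np

    def descriptor(img, bins=32):
        """Fixed-length visual-content descriptor: a normalized intensity
        histogram of an image with values in [0, 1]."""
        h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        return h / max(h.sum(), 1)

    def retrieve(query_img, database_imgs, k=5):
        """Indices of the k database images closest to the query (L2)."""
        q = descriptor(query_img)
        dists = [np.linalg.norm(q - descriptor(im)) for im in database_imgs]
        return np.argsort(dists)[:k]
    ```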

  4. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
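
    In such a setup, a Jenkins job step typically reduces to one headless batch invocation of CellProfiler; a sketch of such a step in Python, assuming CellProfiler's documented batch flags (-c: no GUI, -r: run the pipeline) and illustrative paths (verify flags against your installed version):

    ```python
    import subprocess
    from pathlib import Path

    def run_cellprofiler(pipeline: Path, image_dir: Path, out_dir: Path) -> None:
        """One CI job step: run a CellProfiler pipeline headless."""
        out_dir.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["cellprofiler", "-c", "-r",
             "-p", str(pipeline), "-i", str(image_dir), "-o", str(out_dir)],
            check=True,  # a non-zero exit code fails the build, as CI expects
        )

    # e.g. run_cellprofiler(Path("screen.cppipe"), Path("plate01"), Path("results"))
    ```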

  5. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  6. Multiplexing and de-multiplexing with scattering media for large field of view and multispectral imaging

    NASA Astrophysics Data System (ADS)

    Sahoo, Sujit Kumar; Tang, Dongliang; Dang, Cuong

    2018-02-01

    Large field-of-view multispectral imaging through a scattering medium is a fundamental quest in the optics community. It has gained special attention from researchers in recent years for its wide range of potential applications. However, the main bottlenecks of current imaging systems are their requirements for specific illumination, poor image quality, and limited field of view. In this work, we demonstrate single-shot high-resolution colour imaging through scattering media using a monochromatic camera. This novel imaging technique is enabled by the spatial and spectral decorrelation properties and the optical memory effect of the scattering media. Moreover, deconvolution-based image processing removes the drawbacks that arise from iterative refocusing, scanning, or phase-retrieval procedures.
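
    The deconvolution step can be illustrated with a generic Wiener filter, assuming the medium's speckle point spread function has been measured within the memory-effect range (a sketch, not the authors' exact reconstruction):

    ```python
    import numpy as np

    def wiener_deconvolve(speckle_img, psf, snr=100.0):
        """Recover the hidden object from a single speckle image given the
        scattering medium's point spread function."""
        H = np.fft.fft2(psf, s=speckle_img.shape)
        G = np.fft.fft2(speckle_img)
        W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
        obj = np.real(np.fft.ifft2(W * G))
        return np.fft.fftshift(obj)  # center the reconstruction
    ```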

  7. Lithospheric Structure and Dynamics: Insights Facilitated by the IRIS/PASSCAL Facility

    NASA Astrophysics Data System (ADS)

    Meltzer, A.

    2002-12-01

    Through the development of community-based facilities in portable array seismology, a wide range of seismic methods are now standard tools for imaging the Earth's interior, extending geologic observations made at the surface to depth. The IRIS/PASSCAL program provides the seismological community with the ability to routinely field experimental programs, from high-resolution seismic reflection profiling of the near surface to lithospheric-scale imaging with both active and passive source arrays, to understand the tectonic evolution of continents, how they are assembled, disassembled, and modified through time. As our ability to record and process large volumes of data has improved, we have moved from simple 1-D velocity models and 2-D structural cross sections of the subsurface to 3-D and 4-D images that correlate complex surface tectonics to processes in the Earth's interior. Data from individual IRIS/PASSCAL experiments have fostered multidisciplinary studies, bringing together geologists, geochemists, and geophysicists to work on common problems. As data are collected from a variety of tectonic environments around the globe, common elements begin to emerge. We now recognize and study the inherent lateral and vertical heterogeneity in the crust and mantle lithosphere and its role in controlling deformation, the importance of low-velocity mobile mantle in supporting topography, and the importance of fluids and fluid migration in magmatic and deformational processes. We can image and map faults, fault zones, and fault networks to study them as systems rather than isolated planes of deformation, to better understand earthquake nucleation, rupture, and propagation. An additional benefit of these community-based facilities is the pooling of resources to develop effective and sustainable education and outreach programs. These programs attract new students to careers in earth science, engage the general public in the scientific enterprise, raise the profile of the earth sciences, and reveal the importance of earth processes in shaping the environment in which we live. Future challenges facing our community include the continued evolution of existing facilities to keep pace with scientific inquiry; the routine use of fully 3-D and, where appropriate, 4-D data sets to understand earth structure and dynamics; and the manipulation and analysis of large multidisciplinary data sets. Community models should be considered as a mechanism to integrate, analyze, and share data and results within a process-oriented framework. Exciting developments on the horizon include EarthScope. To maximize the potential for significant advances in our understanding of tectonic processes, observations from new EarthScope facilities must be integrated with additional geologic data sets of similar quality and resolution. New real-time data streams combined with new data integration, analysis, and visualization tools will provide the ability to integrate data across a continuous range of spatial scales, providing a new and coherent view of lithospheric dynamics from local to plate scale.

  8. Annotating images by mining image search results.

    PubMed

    Wang, Xin-Jing; Zhang, Lei; Li, Xirong; Ma, Wei-Ying

    2008-11-01

    Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation: a data-driven approach that annotates images by mining their search results. Some 2.4 million images with their surrounding text were collected from a few photo forums to support this approach. The entire process is formulated in a divide-and-conquer framework, where a query keyword is provided along with the uncaptioned image to improve both effectiveness and efficiency; this is helpful when the collected data set is not dense everywhere. Our approach thus contains three steps: 1) the search process, to discover visually and semantically similar search results; 2) the mining process, to identify salient terms from textual descriptions of the search results; and 3) the annotation rejection process, to filter out noisy terms yielded by step 2. To ensure real-time annotation, two key techniques are leveraged: one is to map the high-dimensional image visual features into hash codes, and the other is to implement the system in a distributed fashion, with the search and mining processes provided as Web services. As a typical result, the entire process finishes in less than 1 second. Since no training data set is required, our approach enables annotation with an unlimited vocabulary and is highly scalable and robust to outliers. Experimental results on both real Web images and a benchmark image data set show the effectiveness and efficiency of the proposed algorithm. It is also worth noting that, although the entire approach is illustrated within the divide-and-conquer framework, a query keyword is not crucial to our current implementation; we provide experimental results to prove this.
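
    Step 2 can be caricatured as scoring terms that are frequent in the search results but rare in a background corpus; a toy sketch with hypothetical inputs (the real system adds the annotation rejection of step 3):

    ```python
    from collections import Counter

    def salient_terms(result_texts, background_freq, top_k=5):
        """Score terms by (frequency in search results) / (background
        frequency): frequent-but-distinctive words win."""
        counts = Counter(w for text in result_texts for w in text.lower().split())
        total = sum(counts.values()) or 1
        score = {w: (c / total) / background_freq.get(w, 1e-6)
                 for w, c in counts.items()}
        return sorted(score, key=score.get, reverse=True)[:top_k]

    print(salient_terms(["sunset over the beach", "beach sunset photo"],
                        {"the": 0.05, "over": 0.01, "photo": 0.005,
                         "beach": 0.0004, "sunset": 0.0003}))
    ```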

  9. Automatic detection of surface changes on Mars - a status report

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter

    2016-10-01

    Orbiter missions have acquired approximately 500,000 high-resolution visible images of the Martian surface, with cumulative coverage approximately 6 times the total area of Mars. This data abundance allows the scientific community to examine the Martian surface thoroughly and potentially make exciting new discoveries. However, the increased data volume and complexity create problems at the data-processing stage, mainly related to unresolved issues in batch-mode planetary data processing. The scientific community is currently struggling to scale the common paradigm ("one-at-a-time" processing of incoming products by expert scientists) to these large volumes of input data. Moreover, expert scientists are more or less forced to use complex software to extract the input information for their research from raw data, even though they are not data scientists themselves. Our work within the STFC and EU FP7 iMars projects aims at developing automated software that will process all of the acquired data, leaving domain-expert planetary scientists free to focus on final analysis and interpretation. Having completed a fully automated pipeline that co-registers high-resolution NASA images to the ESA/DLR HRSC baseline, our main goal has shifted to the automated detection of surface changes on Mars. In particular, we are developing a pipeline that takes multi-instrument image pairs as input and identifies changes correlated with dynamic phenomena on the Martian surface. The pipeline has been tested in anger on 8,000 co-registered images, and by the time of DPS/EPSC we expect to have processed many tens of thousands of image pairs, producing a set of change-detection results, a subset of which will be shown in the presentation. The research leading to these results has received funding from the STFC MSSL Consolidated Grant under "Planetary Surface Data Mining" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement number 607379.
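
    Once a pair is co-registered, the simplest change-detection baseline is robust thresholding of the image difference; a sketch of that baseline (the iMars pipeline itself is more elaborate):

    ```python
    import numpy as np

    def change_map(img_t0, img_t1, k=3.0):
        """Flag pixels whose temporal difference exceeds k robust standard
        deviations; assumes the pair is already co-registered."""
        d = img_t1.astype(float) - img_t0.astype(float)
        med = np.median(d)
        mad = np.median(np.abs(d - med))      # median absolute deviation
        sigma = 1.4826 * mad + 1e-12          # robust sigma estimate
        return np.abs(d - med) > k * sigma    # True = changed pixel
    ```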

  10. Mathematical problems in the application of multilinear models to facial emotion processing experiments

    NASA Astrophysics Data System (ADS)

    Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.

    2000-10-01

    In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (fMRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.
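
    For readers new to multilinear models: a rank-1 PARAFAC/CP fit of a three-way array (say, voxels x time x subjects) takes only a few lines of alternating least squares; a minimal numpy sketch:

    ```python
    import numpy as np

    def rank1_parafac(X, n_iter=100):
        """Rank-1 PARAFAC/CP fit of a 3-way array X by alternating least
        squares: X ~= lam * outer(a, b, c), with a, b, c unit vectors."""
        rng = np.random.default_rng(0)
        a = rng.standard_normal(X.shape[0])
        b = rng.standard_normal(X.shape[1])
        c = rng.standard_normal(X.shape[2])
        for _ in range(n_iter):
            a = np.einsum('ijk,j,k->i', X, b, c); a /= np.linalg.norm(a)
            b = np.einsum('ijk,i,k->j', X, a, c); b /= np.linalg.norm(b)
            c = np.einsum('ijk,i,j->k', X, a, b)
        lam = np.linalg.norm(c)
        return lam, a, b, c / lam
    ```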

  11. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    PubMed Central

    Abbasi, Arash; Berry, Jeffrey C.; Callen, Steven T.; Chavez, Leonardo; Doust, Andrew N.; Feldman, Max J.; Gilbert, Kerrigan B.; Hodge, John G.; Hoyer, J. Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. PMID:29209576
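
    The kind of step such a toolkit modularizes can be illustrated by a generic excess-green segmentation, a common plant/background baseline (this is not PlantCV's actual API):

    ```python
    import numpy as np

    def segment_plant(rgb):
        """Separate green tissue from background with the excess-green index
        ExG = 2g - r - b on chromaticity coordinates; returns a boolean mask."""
        rgb = rgb.astype(float)
        s = rgb.sum(axis=2, keepdims=True) + 1e-8   # per-pixel normalization
        r, g, b = np.moveaxis(rgb / s, 2, 0)
        exg = 2.0 * g - r - b
        return exg > exg.mean() + exg.std()         # True = plant pixel
    ```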

  12. PlantCV v2: Image analysis software for high-throughput plant phenotyping.

    PubMed

    Gehan, Malia A; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C; Callen, Steven T; Chavez, Leonardo; Doust, Andrew N; Feldman, Max J; Gilbert, Kerrigan B; Hodge, John G; Hoyer, J Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  13. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  14. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE PAGES

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash; ...

    2017-12-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  15. Shuttle Entry Imaging Using Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Horvath, Thomas; Berry, Scott; Alter, Stephen; Blanchard, Robert; Schwartz, Richard; Ross, Martin; Tack, Steve

    2007-01-01

    During the Columbia Accident Investigation, imaging teams supporting debris shedding analysis were hampered by poor entry image quality and the general lack of information on optical signatures associated with a nominal Shuttle entry. After the accident, recommendations were made to NASA management to develop and maintain a state-of-the-art imagery database for Shuttle engineering performance assessments and to improve entry imaging capability to support anomaly and contingency analysis during a mission. As a result, the Space Shuttle Program sponsored an observation campaign to qualitatively characterize a nominal Shuttle entry over the widest possible Mach number range. The initial objectives focused on an assessment of the capability to identify and resolve debris liberated from the Shuttle during entry, characterization of potential anomalous events associated with RCS jet firings, and unusual phenomena associated with the plasma trail. The aeroheating technical community viewed the Space Shuttle Program sponsored activity as an opportunity to influence the observation objectives and incrementally demonstrate key elements of a quantitative, spatially resolved temperature measurement capability over a series of flights. One long-term desire of the Shuttle engineering community is to calibrate boundary layer transition prediction methodologies that are presently part of the Shuttle damage assessment process using flight data provided by a controlled Shuttle flight experiment. Quantitative global imaging may offer a complementary method of data collection to more traditional methods such as surface thermocouples. This paper reviews the process used by the engineering community to influence data collection methods and the analysis of global infrared images of the Shuttle obtained during hypersonic entry. Emphasis is placed upon airborne imaging assets sponsored by the Shuttle program during Return to Flight. Visual and IR entry imagery was obtained with available airborne imaging platforms used within DoD, along with agency assets developed and optimized for use during Shuttle ascent, to demonstrate capability (i.e., tracking, acquisition of multispectral data, spatial resolution) and identify system limitations (i.e., radiance modeling, saturation) using state-of-the-art imaging instrumentation and communication systems. Global infrared intensity data have been transformed to temperature by comparison to Shuttle flight thermocouple data. Reasonable agreement is found between the flight thermography images and numerical prediction. A discussion of lessons learned and potential application to a Shuttle boundary layer transition flight test is presented.

  16. The Spectral Image Processing System (SIPS) - Interactive visualization and analysis of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1993-01-01

    The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based and user-friendly, providing 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on the use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).

  17. sTools - a data reduction pipeline for the GREGOR Fabry-Pérot Interferometer and the High-resolution Fast Imager at the GREGOR solar telescope

    NASA Astrophysics Data System (ADS)

    Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.

    2017-10-01

    A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and, since 2016, the High-resolution Fast Imager (HiFI). These data are processed with standardized procedures with the aim of providing science-ready data to the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools", based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated, as well as a webpage with an overview of the observations and their statistics. All processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.

  18. Color engineering in the age of digital convergence

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.

    1998-09-01

    Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.

  19. A De-Identification Pipeline for Ultrasound Medical Images in DICOM Format.

    PubMed

    Monteiro, Eriksson; Costa, Carlos; Oliveira, José Luís

    2017-05-01

    Clinical data sharing between healthcare institutions and between practitioners is often hindered by privacy-protection requirements. This problem is critical in collaborative scenarios where data sharing is fundamental for establishing a workflow among parties. The anonymization of patient information burned into DICOM images requires elaborate processes, somewhat more complex than simple de-identification of textual information: usually, before sharing, specific areas of the images containing sensitive information must be removed manually. In this paper, we present a pipeline for ultrasound medical image de-identification, provided as a free anonymization REST service for medical image applications and as a Software-as-a-Service that streamlines automatic de-identification of medical images, freely available to end users. The proposed approach applies image-processing functions and machine-learning models to produce an automatic system for anonymizing medical images. To perform character recognition, we evaluated several machine-learning models, with convolutional neural networks (CNNs) selected as the best approach. To assess system quality, 500 processed images were manually inspected, showing an anonymization rate of 89.2%. The tool can be accessed at https://bioinformatics.ua.pt/dicom/anonymizer and works with the most recent versions of Google Chrome, Mozilla Firefox, and Safari. A Docker image containing the proposed service is also publicly available to the community.
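
    A minimal sketch of the pixel-level part of such a pipeline using pydicom, with a fixed top-band mask standing in for the paper's CNN-based text detection; the row count and tag list are illustrative only:

    ```python
    import pydicom

    def deidentify(path_in, path_out, header_rows=60):
        """Blank the top band of the frame (where ultrasound devices commonly
        burn in patient text) and clear a few identifying tags. The fixed band
        and short tag list are crude stand-ins for OCR-guided masking and a
        full de-identification profile."""
        ds = pydicom.dcmread(path_in)
        pixels = ds.pixel_array.copy()
        pixels[:header_rows, ...] = 0          # mask the burned-in text region
        ds.PixelData = pixels.tobytes()        # write masked pixels back
        for keyword in ("PatientName", "PatientID", "PatientBirthDate"):
            if keyword in ds:
                setattr(ds, keyword, "")
        ds.save_as(path_out)
    ```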

  20. A PDS Archive for Observations of Mercury's Na Exosphere

    NASA Astrophysics Data System (ADS)

    Backes, C.; Cassidy, T.; Merkel, A. W.; Killen, R. M.; Potter, A. E.

    2016-12-01

    We present a data product consisting of ground-based observations of Mercury's sodium exosphere. We have amassed a sizeable dataset of several thousand spectral observations of Mercury's exosphere from the McMath-Pierce solar telescope. Over the last year, a data reduction pipeline has been developed and refined to process and reconstruct these spectral images into low resolution images of sodium D2 emission. This dataset, which extends over two decades, will provide an unprecedented opportunity to analyze the dynamics of Mercury's mid to high-latitude exospheric emissions, which have long been attributed to solar wind ion bombardment. This large archive of observations will be of great use to the Mercury science community in studying the effects of space weather on Mercury's tenuous exosphere. When completely processed, images in this dataset will show the observed spatial distribution of Na D2 in the Mercurian exosphere, have measurements of this sodium emission per pixel in units of kilorayleighs, and be available through NASA's Planetary Data System. The overall goal of the presentation will be to provide the Planetary Science community with a clear picture of what information and data this archival product will make available.

  1. 3D Imaging of Microbial Biofilms: Integration of Synchrotron Imaging and an Interactive Visualization Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.

    2014-08-26

    Understanding the interactions of structured communities known as “biofilms” and other complex matrices is possible through X-ray microtomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels, so new software is required for effective interpretation and analysis of the data. This work describes the development and application of tools to analyze and visualize high-resolution X-ray microtomography datasets.

  2. Effects of "Good News" and "Bad News" on Newscast Image and Community Image.

    ERIC Educational Resources Information Center

    Galician, Mary-Lou; Vestre, Norris D.

    1987-01-01

    Investigates whether the relative amount of bad, neutral, and good news on television has corresponding effects on viewers' image of the community depicted and of the carrying newscast. Concludes that bad news creates a bad image for the community but that good news does not produce a more favorable image than neutral news. (MM)

  3. Developing a tablet computer-based application ('App') to measure self-reported alcohol consumption in Indigenous Australians.

    PubMed

    Lee, K S Kylie; Wilson, Scott; Perry, Jimmy; Room, Robin; Callinan, Sarah; Assan, Robert; Hayman, Noel; Chikritzhs, Tanya; Gray, Dennis; Wilkes, Edward; Jack, Peter; Conigrave, Katherine M

    2018-01-15

    The challenges of assessing alcohol consumption can be greater in Indigenous communities where there may be culturally distinct approaches to communication, sharing of drinking containers and episodic patterns of drinking. This paper discusses the processes used to develop a tablet computer-based application ('App') to collect a detailed assessment of drinking patterns in Indigenous Australians. The key features of the resulting App are described. An iterative consultation process was used (instead of one-off focus groups), with Indigenous cultural experts and clinical experts. Regular (weekly or more) advice was sought over a 12-month period from Indigenous community leaders and from a range of Indigenous and non-Indigenous health professionals and researchers. The underpinning principles, selected survey items, and key technical features of the App are described. Features include culturally appropriate questioning style and gender-specific voice and images; community-recognised events used as reference points to 'anchor' time periods; 'translation' to colloquial English and (for audio) to traditional language; interactive visual approaches to estimate quantity of drinking; images of specific brands of alcohol, rather than abstract description of alcohol type (e.g. 'spirits'); images of make-shift drinking containers; option to estimate consumption based on the individual's share of what the group drank. With any survey platform, helping participants to accurately reflect on and report their drinking presents a challenge. The availability of interactive, tablet-based technologies enables potential bridging of differences in culture and lifestyle and enhanced reporting.

  4. Advancements in medium and high resolution Earth observation for land-surface imaging: Evolutions, future trends and contributions to sustainable development

    NASA Astrophysics Data System (ADS)

    Ouma, Yashon O.

    2016-01-01

    Technologies for imaging the surface of the Earth through satellite-based Earth observation (EO) have evolved enormously over the past 50 years, and the trends are likely to evolve further as the user community grows and its awareness of and demand for EO data increase. In this review paper, a development trend for EO imaging systems is presented with the objective of deriving the evolving patterns for the EO user community. From the review and analysis of medium-to-high resolution EO-based land-surface sensor missions, it is observed that there is a predictable pattern in EO evolution: every 10-15 years, more sophisticated EO imaging systems with application-specific capabilities emerge. Such new systems, as determined in this review, are likely to comprise agile, small-payload-mass EO land-surface imaging satellites capable of high-velocity data transmission and delivering huge volumes of data at high spatial, spectral, temporal, and radiometric resolution. This availability of data will magnify the phenomenon of "Big Data" in Earth observation. Because of the "Big Data" issue, new computing and processing platforms such as telegeoprocessing and grid computing are expected to be incorporated into EO data processing and distribution networks. In general, it is observed that the demand for EO is growing exponentially as its applications and cost-benefits are recognized in support of resource management.

  5. Decoding molecular interactions in microbial communities

    PubMed Central

    Abreu, Nicole A.; Taga, Michiko E.

    2016-01-01

    Microbial communities govern numerous fundamental processes on earth. Discovering and tracking molecular interactions among microbes is critical for understanding how single species and complex communities impact their associated host or natural environment. While recent technological developments in DNA sequencing and functional imaging have led to new and deeper levels of understanding, we are limited now by our inability to predict and interpret the intricate relationships and interspecies dependencies within these communities. In this review, we highlight the multifaceted approaches investigators have taken within their areas of research to decode interspecies molecular interactions that occur between microbes. Understanding these principles can give us greater insight into ecological interactions in natural environments and within synthetic consortia. PMID:27417261

  6. The ESA/ESO/NASA Photoshop FITS Liberator 3: Have your say on new features

    NASA Astrophysics Data System (ADS)

    Nielsen, L. H.; Christensen, L. L.; Hurt, R. L.; Nielsen, K.; Johansen, T.

    2008-06-01

    The popular, free ESA/ESO/NASA Photoshop FITS Liberator image processing software (a plugin for Adobe Photoshop) is about to get simpler, faster and more user-friendly! Here we would like to solicit inputs from the community of users.

  7. New Images for Adult Education.

    ERIC Educational Resources Information Center

    Cross, K. Patricia

    1988-01-01

    Argues that the pace of change in today's society demands that community service educators promote lifelong learning by projecting continuing education not as a product to be purchased, but as an unending, interactive, personal, and individualized process, more like a fitness center than a shopping mall. (DMM)

  8. Taking the Mystery Out of Marketing.

    ERIC Educational Resources Information Center

    Fuller, Donald A.

    1982-01-01

    Seeks to clarify the marketing process in the promotion of a school's educational offerings and the school's image within the community. Divides activities into advertising, personal selling, sales promotion, and publicity. Includes a sample promotional plan which identifies objectives and tasks required for development and implementation. (DMM)

  9. High-cadence Imaging and Imaging Spectroscopy at the GREGOR Solar Telescope—A Collaborative Research Environment for High-resolution Solar Physics

    NASA Astrophysics Data System (ADS)

    Denker, Carsten; Kuckein, Christoph; Verma, Meetu; González Manrique, Sergio J.; Diercke, Andrea; Enke, Harry; Klar, Jochen; Balthasar, Horst; Louis, Rohan E.; Dineva, Ekaterina

    2018-05-01

    In high-resolution solar physics, the volume and complexity of photometric, spectroscopic, and polarimetric ground-based data significantly increased in the last decade, reaching data acquisition rates of terabytes per hour. This is driven by the desire to capture fast processes on the Sun and the necessity for short exposure times “freezing” the atmospheric seeing, thus enabling ex post facto image restoration. Consequently, large-format and high-cadence detectors are nowadays used in solar observations to facilitate image restoration. Based on our experience during the “early science” phase with the 1.5 m GREGOR solar telescope (2014–2015) and the subsequent transition to routine observations in 2016, we describe data collection and data management tailored toward image restoration and imaging spectroscopy. We outline our approaches regarding data processing, analysis, and archiving for two of GREGOR’s post-focus instruments (see http://gregor.aip.de), i.e., the GREGOR Fabry–Pérot Interferometer (GFPI) and the newly installed High-Resolution Fast Imager (HiFI). The heterogeneous and complex nature of multidimensional data arising from high-resolution solar observations provides an intriguing but also a challenging example for “big data” in astronomy. The big data challenge has two aspects: (1) establishing a workflow for publishing the data for the whole community and beyond and (2) creating a collaborative research environment (CRE), where computationally intense data and postprocessing tools are colocated and collaborative work is enabled for scientists of multiple institutes. This requires either collaboration with a data center or frameworks and databases capable of dealing with huge data sets based on virtual observatory (VO) and other community standards and procedures.

  10. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems II. Extension to the thermal infrared: equations and methods

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Lomheim, Terrence S.; Florio, Christopher J.; Harbold, Jeffrey M.; Muto, B. Michael; Schoolar, Richard B.; Wintz, Daniel T.; Keller, Robert A.

    2011-10-01

    In a previous paper in this series, we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) tool may be used to model space and airborne imaging systems operating in the visible to near-infrared (VISNIR). PICASSO is a systems-level tool, representative of a class of such tools used throughout the remote sensing community. It is capable of modeling systems over a wide range of fidelity, anywhere from the conceptual design level (where it can serve as an integral part of the systems engineering process) to as-built hardware (where it can serve as part of the verification process). In the present paper, we extend the discussion of PICASSO to the modeling of thermal infrared (TIR) remote sensing systems, presenting the equations and methods necessary for modeling in that regime.
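
    Central to any TIR image-chain model, and absent from the VISNIR case, is the scene's own thermal emission; for reference, the Planck spectral radiance that such a model evaluates per pixel:

    ```python
    import numpy as np

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

    def planck_radiance(wavelength_m, temp_k):
        """Blackbody spectral radiance L(lambda, T) in W m^-2 sr^-1 m^-1."""
        x = H * C / (wavelength_m * KB * temp_k)
        return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(x)

    print(planck_radiance(10e-6, 300.0))  # ~300 K scene at 10 um (LWIR window)
    ```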

  11. Protein Crystal Growth

    NASA Technical Reports Server (NTRS)

    2003-01-01

    In order to rapidly and efficiently grow crystals, tools were needed to automatically identify and analyze the growing process of protein crystals. To meet this need, Diversified Scientific, Inc. (DSI), with the support of a Small Business Innovation Research (SBIR) contract from NASA's Marshall Space Flight Center, developed CrystalScore(trademark), the first automated image acquisition, analysis, and archiving system designed specifically for the macromolecular crystal growing community. It offers automated hardware control, image and data archiving, image processing, a searchable database, and surface plotting of experimental data. CrystalScore is currently being used by numerous pharmaceutical companies and academic and nonprofit research centers. DSI, located in Birmingham, Alabama, was awarded the patent "Method for acquiring, storing, and analyzing crystal images" on March 4, 2003. Another DSI product made possible by Marshall SBIR funding is VaporPro(trademark), a unique, comprehensive system that allows for the automated control of vapor diffusion for crystallization experiments.

  12. Imaging cell competition in Drosophila imaginal discs.

    PubMed

    Ohsawa, Shizue; Sugimura, Kaoru; Takino, Kyoko; Igaki, Tatsushi

    2012-01-01

    Cell competition is a process in which cells with higher fitness ("winners") survive and proliferate at the expense of less fit neighbors ("losers"). It has been suggested that cell competition is involved in a variety of biological processes such as organ size control, tissue homeostasis, cancer progression, and the maintenance of stem cell populations. With the advent of genetic mosaic techniques, which enable the generation of fluorescently marked somatic clones in Drosophila imaginal discs, recent studies have revealed aspects of the molecular mechanisms underlying cell competition. Now, with a live-imaging technique using ex vivo-cultured imaginal discs, we can dissect the spatiotemporal nature of competitive cell behaviors within multicellular communities. Here, we describe procedures and tips for live imaging of cell competition in Drosophila imaginal discs.

  13. Urban Space Innovation - “10+” Principles through Designing the New Image of the Existing Shopping Mall in Csepel, Hungary

    NASA Astrophysics Data System (ADS)

    Gyergyak, Janos

    2017-10-01

    The first part of the paper introduces the principles of “placemaking” as an innovative and important tool for cities in the 21st century. The process helps designers transform spaces that belong to “nobody” into community-based spaces that support connections among people. The second part of the paper shows how the author applied these principles in designing the new image of the existing shopping mall in Csepel, Hungary. This work was selected as one of the best design ideas for renewing the existing underutilized space.

  14. Geology

    NASA Technical Reports Server (NTRS)

    Stewart, R. K.; Sabins, F. F., Jr.; Rowan, L. C.; Short, N. M.

    1975-01-01

    Papers from private industry reporting applications of remote sensing to oil and gas exploration were presented. Digitally processed LANDSAT images were successfully employed in several geologic interpretations. A growing interest in digital image processing among the geologic user community was shown. The papers covered a wide geographic range and a wide technical and application range. Topics included: (1) oil and gas exploration, by use of radar and multisensor studies as well as by use of LANDSAT imagery or LANDSAT digital data, (2) mineral exploration, by mapping from LANDSAT and Skylab imagery and by LANDSAT digital processing, (3) geothermal energy studies with Skylab imagery, (4) environmental and engineering geology, by use of radar or LANDSAT and Skylab imagery, (5) regional mapping and interpretation, and digital and spectral methods.

  15. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    PubMed

    Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang

    2015-04-01

    Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
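
    The multi-tiered idea reduces to chaining classifiers so that later tiers are trained and applied only on what earlier tiers accept; a toy scikit-learn sketch with random features standing in for real image descriptors:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Two-tier cascade on toy data: tier 1 separates animal from background
    # regions; tier 2, trained only on animal regions, finds the head.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))                 # per-region feature vectors
    is_animal = rng.integers(0, 2, size=200)       # toy tier-1 labels
    is_head = rng.integers(0, 2, size=200)         # toy tier-2 labels

    tier1 = SVC(kernel="rbf").fit(X, is_animal)
    tier2 = SVC(kernel="rbf").fit(X[is_animal == 1], is_head[is_animal == 1])

    def classify(features):
        """Route one region's feature vector through the cascade."""
        if tier1.predict(features[None, :])[0] == 0:
            return "background"
        return "head" if tier2.predict(features[None, :])[0] == 1 else "body"

    print(classify(X[0]))
    ```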

  16. The Montage architecture for grid-enabled science processing of large, distributed datasets

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S .; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui

    2004-01-01

    Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive and are stored in distributed archives that are, in most cases, remote from the available computational resources. Therefore, state-of-the-art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and space science communities.

  17. Can masses of non-experts train highly accurate image classifiers? A crowdsourcing approach to instrument segmentation in laparoscopic images.

    PubMed

    Maier-Hein, Lena; Mersmann, Sven; Kondermann, Daniel; Bodenstedt, Sebastian; Sanchez, Alexandro; Stock, Christian; Kenngott, Hannes Gotz; Eisenmann, Mathias; Speidel, Stefanie

    2014-01-01

    Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, scalability and low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.
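
    Crowd annotations in such studies are commonly fused pixel-wise; a minimal majority-vote sketch (one common fusion rule; the paper's aggregation may differ):

    ```python
    import numpy as np

    def fuse_crowd_masks(masks):
        """Pixel-wise majority vote over binary masks from many annotators;
        ties go to background."""
        stack = np.stack([m.astype(np.uint8) for m in masks])
        return stack.sum(axis=0) * 2 > len(masks)  # True where most voted 1

    votes = [np.array([[1, 0], [1, 1]]), np.array([[1, 0], [0, 1]]),
             np.array([[0, 0], [1, 1]])]
    print(fuse_crowd_masks(votes))                 # [[ True False] [ True  True]]
    ```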

  18. Dorsolateral prefrontal cortex activation during emotional anticipation and neuropsychological performance in posttraumatic stress disorder.

    PubMed

    Aupperle, Robin L; Allard, Carolyn B; Grimes, Erin M; Simmons, Alan N; Flagan, Taru; Behrooznia, Michelle; Cissell, Shadha H; Twamley, Elizabeth W; Thorp, Steven R; Norman, Sonya B; Paulus, Martin P; Stein, Murray B

    2012-04-01

    Context: Posttraumatic stress disorder (PTSD) has been associated with executive or attentional dysfunction and problems in emotion processing; however, it is unclear whether these two domains of dysfunction are related to common or distinct neurophysiological substrates. Objective: To examine the hypothesis that greater neuropsychological impairment in PTSD relates to greater disruption in prefrontal-subcortical networks during emotional anticipation. Design: Case-control, cross-sectional study. Setting: General community and hospital and community psychiatric clinics. Participants: Volunteer sample of 37 women with PTSD related to intimate partner violence and 34 age-comparable healthy control women. Main Outcome Measures: Functional magnetic resonance imaging (fMRI) was used to examine neural responses during anticipation of negative and positive emotional images; the Clinician-Administered PTSD Scale was used to characterize PTSD symptom severity; and the Wechsler Adult Intelligence Scale, Third Edition, Digit Symbol Test, Delis-Kaplan Executive Function System Color-Word Interference Test, and Wisconsin Card Sorting Test were used to characterize neuropsychological performance. Results: Women with PTSD performed worse on complex visuomotor processing speed (Digit Symbol Test) and executive function (Color-Word Interference Inhibition/Switching subtest) measures compared with control subjects. Posttraumatic stress disorder was associated with greater anterior insula and attenuated lateral prefrontal cortex (PFC) activation during emotional anticipation. Greater dorsolateral PFC activation (anticipation of negative images minus anticipation of positive images) was associated with lower PTSD symptom severity and better visuomotor processing speed and executive functioning. Greater medial PFC and amygdala activation related to slower visuomotor processing speed. Conclusions: During emotional anticipation, women with PTSD show exaggerated activation in the anterior insula, a region important for monitoring internal bodily state. Greater dorsolateral PFC response in PTSD patients during emotional anticipation may reflect engagement of cognitive control networks that are beneficial for emotional and cognitive functioning. Novel treatments could be aimed at strengthening the balance between cognitive control (dorsolateral PFC) and affective processing (medial PFC and amygdala) networks to improve overall functioning for PTSD patients.

  19. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in specialized hardware, limiting the flexibility of the processing steps implemented on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, backed by point-spread-function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition: given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
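
    The tail of such a pipeline, from beamformed RF to B-mode, is envelope detection followed by log compression; a CPU sketch with numpy/scipy (SUPRA's actual implementation is GPU-based):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def rf_to_bmode(rf, dynamic_range_db=60.0):
        """Beamformed RF (samples x scanlines) to B-mode: envelope detection
        via the analytic signal, then log compression to a fixed dynamic range."""
        env = np.abs(hilbert(rf, axis=0))        # envelope of each scanline
        env /= env.max() + 1e-12
        bmode_db = 20.0 * np.log10(env + 1e-12)  # convert to decibels
        return np.clip(bmode_db, -dynamic_range_db, 0.0) + dynamic_range_db
    ```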

  20. Development of imaging biomarkers and generation of big data.

    PubMed

    Alberich-Bayarri, Ángel; Hernández-Navarro, Rafael; Ruiz-Martínez, Enrique; García-Castro, Fabio; García-Juan, David; Martí-Bonmatí, Luis

    2017-06-01

    Several image processing algorithms have emerged to cover unmet clinical needs, but their application to radiological routine with a clear clinical impact is still not straightforward. Moving from local to big infrastructures, such as Medical Imaging Biobanks (millions of studies), or even Federations of Medical Imaging Biobanks (in some cases totaling hundreds of millions of studies), requires the integration of automated pipelines for fast analysis of pooled data to extract clinically relevant conclusions, not linked uniquely to medical imaging but combined with other information such as genetic profiling. A general strategy for the development of imaging biomarkers and their integration in the cloud for quantitative management and exploitation in large databases is herein presented. The proposed platform has been successfully launched and is currently being validated among the early adopters' community of radiologists, clinicians, and medical imaging researchers.

  1. JunoCam Outreach: Lessons Learned from Juno's Earth Flyby

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Caplinger, M. A.; Ravine, M. A.

    2014-12-01

    The JunoCam visible imager is on the Juno spacecraft explicitly to include the public in the operation of a spacecraft instrument at Jupiter. Amateur astronomers will provide images in 2015 and 2016, as the spacecraft approaches Jupiter, for planning purposes, and also during the mission to provide context for JunoCam's high-resolution pictures. Targeted imaging of specific features would enhance science value, but the dynamic nature of the jovian atmosphere makes this almost completely dependent on ground-based observations. The public will be involved in deciding which images to acquire in each perijove pass. Partnership with the amateur image processing community will be essential for processing images during the Juno mission. This piece of the virtual team plan was successfully carried out as Juno executed its Earth flyby gravity assist in 2013. Although we will have a professional ops team at Malin Space Science Systems, the tiny size of the team overall means that public participation is not just an extra - it is essential to our success.

  2. Active learning methods for interactive image retrieval.

    PubMed

    Gosselin, Philippe Henri; Cord, Matthieu

    2008-07-01

    Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, many extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, the active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process, RETIN. First, as any active method is sensitive to the estimation of the boundary between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error used to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.
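
    The core selection step that RETIN builds on can be sketched briefly. The snippet below shows plain boundary-uncertainty sampling with an SVM: pick the unlabeled images closest to the current decision boundary as the next batch. RETIN's actual criterion adds a boundary correction and a ranking-oriented objective on top of this basic idea; names and parameters here are illustrative.

        import numpy as np
        from sklearn.svm import SVC

        def select_batch(features, labeled_idx, labels, batch_size=10):
            # labels must contain both relevant and irrelevant examples.
            clf = SVC(kernel="rbf", gamma="scale").fit(features[labeled_idx], labels)
            unlabeled = np.setdiff1d(np.arange(len(features)), labeled_idx)
            # Smallest |decision value| = closest to the boundary = most uncertain.
            margin = np.abs(clf.decision_function(features[unlabeled]))
            return unlabeled[np.argsort(margin)[:batch_size]]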

  3. Automatic Coregistration and orthorectification (ACRO) and subsequent mosaicing of NASA high-resolution imagery over the Mars MC11 quadrangle, using HRSC as a baseline

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian

    2018-02-01

    This work presents the coregistered, orthorectified, and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered geotiff image, a corresponding footprint, and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of static and dynamic features of Mars. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require extensive human effort.

  4. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided, and the importance of preserving textural information is highlighted. Feature extraction and classification methods are presented, taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation.
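
    As a small illustration of the shared pre-processing stage the survey highlights, the sketch below applies PCA-based de-noising to per-voxel (or per-pixel) time curves, a step common to DCE-MRI and THz pulse-imaging pipelines. It is a generic example under assumed data shapes, not code from either community.

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_denoise(curves, n_components=5):
            # curves: (n_voxels, n_timepoints) signal matrix.
            # Keep only the leading components and reconstruct, discarding
            # low-variance directions that are dominated by noise.
            pca = PCA(n_components=n_components)
            reduced = pca.fit_transform(curves)
            return pca.inverse_transform(reduced)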

  5. Mars Data analysis and visualization with Marsoweb

    NASA Astrophysics Data System (ADS)

    Gulick, V. G.; Deardorff, D. G.

    2003-04-01

    Marsoweb is a collaborative web environment that has been developed for the Mars research community to better visualize and analyze Mars orbiter data. Its goal is to enable online data discovery by providing an intuitive, interactive interface to data from the Mars Global Surveyor and other orbiters. Recently Marsoweb has served a prominent role as a resource center for the site selection process for the Mars Exploration Rover 2003 missions. In addition to hosting a repository of landing site memoranda and workshop talks, it includes a Java-based interface to a variety of data maps and images. This interface enables the display and numerical querying of data, and allows data profiles to be rendered from user-drawn cross-sections. High-resolution Mars Orbiter Camera (MOC) images (currently over 100,000) can be graphically perused; browser-based image processing tools can be used on MOC images of potential landing sites. An automated VRML atlas allows users to construct "flyovers" of their own regions-of-interest in 3D. These capabilities enable Marsoweb to be used for general global data studies, in addition to those specific to landing site selection. As of December 2002, Marsoweb had been viewed by 88,000 distinct users from NASA, USGS, academia, and the general public, with a total of 3.3 million hits (801,000 page requests). The High Resolution Imaging Experiment team for the Mars 2005 Orbiter (HiRISE, PI Alfred McEwen) plans to cast a wide net to collect targeting suggestions. Members of the general public as well as the broad Mars science community will be able to submit suggestions of high-resolution imaging targets. The web-based interface for target suggestion input (HiWeb) will be based upon Marsoweb (http://marsoweb.nas.nasa.gov).

  6. The Very Large Array Data Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an international consortium of scientists and software developers based at the National Radio Astronomy Observatory (NRAO), the European Southern Observatory (ESO), and the National Astronomical Observatory of Japan (NAOJ).
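
    The task/context design described here can be sketched in a few lines of Python. The following is an illustrative toy, not the real CASA or VLA pipeline API: atomic tasks run in sequence and record their heuristic decisions in a shared context that a weblog-style report could later render.

        class Context(dict):
            # Shared state passed through the pipeline; accumulates results.
            def log(self, stage, **results):
                self.setdefault("weblog", []).append((stage, results))

        class PipelineTask:
            name = "task"
            def run(self, context):
                raise NotImplementedError

        class FlagBadData(PipelineTask):
            name = "flagging"
            def run(self, context):
                # Toy heuristic: pick a threshold from observation metadata.
                threshold = 5.0 if context.get("band") == "S" else 3.0
                context.log(self.name, threshold=threshold)

        def execute(tasks, context):
            for task in tasks:
                task.run(context)
            return context

        ctx = execute([FlagBadData()], Context(band="S"))
        print(ctx["weblog"])  # [('flagging', {'threshold': 5.0})]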

  7. Standardization of left atrial, right ventricular, and right atrial deformation imaging using two-dimensional speckle tracking echocardiography: a consensus document of the EACVI/ASE/Industry Task Force to standardize deformation imaging.

    PubMed

    Badano, Luigi P; Kolias, Theodore J; Muraru, Denisa; Abraham, Theodore P; Aurigemma, Gerard; Edvardsen, Thor; D'Hooge, Jan; Donal, Erwan; Fraser, Alan G; Marwick, Thomas; Mertens, Luc; Popescu, Bogdan A; Sengupta, Partho P; Lancellotti, Patrizio; Thomas, James D; Voigt, Jens-Uwe

    2018-03-27

    The EACVI/ASE/Industry Task Force to standardize deformation imaging prepared this consensus document to standardize definitions and techniques for using two-dimensional (2D) speckle tracking echocardiography (STE) to assess left atrial, right ventricular, and right atrial myocardial deformation. This document is intended for both the technical engineering community and the clinical community at large to provide guidance on selecting the functional parameters to measure and how to measure them using 2D STE. This document aims to represent a significant step forward in the collaboration between the scientific societies and industry, since technical specifications of the software packages designed to post-process echocardiographic datasets have been agreed upon and shared before their actual development. Hopefully, this will lead to more clinically oriented software packages which are better tailored to clinical needs and will allow industry to save time and resources in their development.

  8. Investigating Bacterial-Animal Symbioses with Light Sheet Microscopy

    PubMed Central

    Taormina, Michael J.; Jemielita, Matthew; Stephens, W. Zac; Burns, Adam R.; Troll, Joshua V.; Parthasarathy, Raghuveer; Guillemin, Karen

    2014-01-01

    Microbial colonization of the digestive tract is a crucial event in vertebrate development, required for maturation of host immunity and establishment of normal digestive physiology. Advances in genomic, proteomic, and metabolomic technologies are providing a more detailed picture of the constituents of the intestinal habitat, but these approaches lack the spatial and temporal resolution needed to characterize the assembly and dynamics of microbial communities in this complex environment. We report the use of light sheet microscopy to provide high resolution imaging of bacterial colonization of the zebrafish intestine. The methodology allows us to characterize bacterial population dynamics across the entire organ and the behaviors of individual bacterial and host cells throughout the colonization process. The large four-dimensional datasets generated by these imaging approaches require new strategies for image analysis. When integrated with other “omics” datasets, information about the spatial and temporal dynamics of microbial cells within the vertebrate intestine will provide new mechanistic insights into how microbial communities assemble and function within hosts.

  9. Detecting Below-Ground Processes, Diversity, and Ecosystem Function in a Savanna Ecosystem Using Spectroscopy Across Different Vegetation Layers

    NASA Astrophysics Data System (ADS)

    Cavender-Bares, J.; Schweiger, A. K.; Madritch, M. D.; Gamon, J. A.; Hobbie, S. E.; Montgomery, R.; Townsend, P. A.

    2017-12-01

    Above- and below-ground plant traits are important for substrate input to the rhizosphere. The substrate composition of the rhizosphere, in turn, affects the diversity of soil organisms and influences soil biochemistry, water content, and resource availability for plant growth. This has substantial consequences for ecosystem functions, such as above-ground productivity and stability. Above-ground plant chemical and structural traits can be linked to the characteristics of other plant organs, including roots. Airborne imaging spectroscopy has been successfully used to model and predict chemical and structural traits of the above-ground vegetation. However, remotely sensed images capture, almost exclusively, signals from the top of the canopy, providing limited direct information about understory vegetation. Here, we use a data set collected in a savanna ecosystem consisting of spectral measurements gathered at the leaf, whole-plant, and vegetation canopy levels to test for hypothesized linkages between above- and below-ground processes that influence root biomass, soil biochemistry, and the diversity of the soil community. In this environment, consisting of herbaceous vegetation intermixed with shrubs and trees growing at variable densities, we investigate the contribution of different vegetation strata to soil characteristics and test the ability of imaging spectroscopy to detect these in plant communities with contrasting vertical structure.

  10. Community tools for cartographic and photogrammetric processing of Mars Express HRSC images

    USGS Publications Warehouse

    Kirk, Randolph L.; Howington-Kraus, Elpitha; Edmundson, Kenneth L.; Redding, Bonnie L.; Galuszka, Donna M.; Hare, Trent M.; Gwinner, K.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.

    2017-01-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77% of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products, from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and in the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995), which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was necessary to split observations into blocks of constant exposure time, greatly increasing the effort needed to control the images and collect DTMs. Here, we describe a substantially improved HRSC processing capability that incorporates sensor models with varying line timing in the current ISIS3 system (Sides 2017) and SOCET SET. This enormously reduces the work effort for processing most images and eliminates the artifacts that arose from segmenting them. In addition, the software takes advantage of the continuously evolving capabilities of ISIS3 and the improved image matching module NGATE (Next Generation Automatic Terrain Extraction, incorporating area- and feature-based algorithms, multi-image and multi-direction matching) of SOCET SET, thus greatly reducing the need for manual editing of DTM errors. We have also developed a procedure for geodetically controlling the images to Mars Orbiter Laser Altimeter (MOLA) data by registering a preliminary stereo topographic model to MOLA using the point cloud alignment (pc_align) function of the NASA Ames Stereo Pipeline (ASP; Moratto et al. 2010). This effectively converts inter-image tiepoints into ground control points in the MOLA coordinate system. The result is improved absolute accuracy and a significant reduction in work effort relative to manual measurement of ground control. The ISIS and ASP software used are freely available; SOCET SET is a commercial product.
By the end of 2017 we expect to have ported our SOCET SET HRSC sensor model to the Community Sensor Model (CSM; Community Sensor Model Working Group 2010; Hare and Kirk 2017) standard utilized by the successor photogrammetric system SOCET GXP, which is currently offered by BAE. We are also working with BAE to release the CSM source code under a BSD or MIT open source license in early 2018.
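
    The registration step described above rests on rigid point-cloud alignment. As a conceptual illustration only, the sketch below implements the single best-fit rotation-and-translation step (the Kabsch algorithm) for point sets with known correspondences; ASP's pc_align wraps this kind of step in an iterative-closest-point loop with outlier handling, so this is not its actual implementation.

        import numpy as np

        def kabsch(P, Q):
            # Best-fit R, t minimizing ||R @ p + t - q|| over corresponding
            # rows of P and Q, each of shape (n_points, 3).
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cQ - R @ cP
            return R, t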

  11. Going the Distance: Taking a Diagnostic Imaging Program to Frontier and Rural Oregon

    ERIC Educational Resources Information Center

    Malosh, Ann; Mallory, Stacy; Olson, Marcene

    2009-01-01

    The Grow Your Own diagnostic imaging program is a public/private collaborative venture involving the efforts of an array of community colleges, employers, workforce, and educational partners throughout Oregon. This statewide Community College Partnership delivers diagnostic imaging education to Oregon's rural communities via distributed learning…

  12. The imaging 3.0 informatics scorecard.

    PubMed

    Kohli, Marc; Dreyer, Keith J; Geis, J Raymond

    2015-04-01

    Imaging 3.0 is a radiology community initiative to empower radiologists to create and demonstrate value for their patients, referring physicians, and health systems. In image-guided health care, radiologists contribute to the entire health care process, well before and after the actual examination, and out to the point at which they guide clinical decisions and affect patient outcome. Because imaging is so pervasive, radiologists who adopt Imaging 3.0 concepts in their practice can help their health care systems provide consistently high-quality care at reduced cost. By doing this, radiologists become more valuable in the new health care setting. The authors describe how informatics is critical to embracing Imaging 3.0 and present a scorecard that can be used to gauge a radiology group's informatics resources and capabilities.

  13. A survey of GPU-based acceleration techniques in MRI reconstructions

    PubMed Central

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou

    2018-01-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computation. Modern platforms built on the graphics processing unit (GPU) have made high-performance parallel computation available, and attractive to common consumers, for massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community.

  14. A survey of GPU-based acceleration techniques in MRI reconstructions.

    PubMed

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou; Liang, Dong

    2018-03-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computation. Modern platforms built on the graphics processing unit (GPU) have made high-performance parallel computation available, and attractive to common consumers, for massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community.
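
    At the core of many of the surveyed schemes is moving the transform-heavy inner loop onto the GPU. A minimal sketch, assuming fully sampled Cartesian k-space: because CuPy mirrors the NumPy API, the same reconstruction code can run on GPU or CPU by swapping the array module.

        import numpy as np
        try:
            import cupy as xp   # GPU path if CuPy and a CUDA device are present
        except ImportError:
            xp = np             # CPU fallback

        def recon_cartesian(kspace):
            # Centered 2D inverse FFT of k-space; magnitude image out.
            img = xp.fft.fftshift(xp.fft.ifft2(xp.fft.ifftshift(kspace)))
            return xp.abs(img)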

  15. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users' and developers' mailing list, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.

  16. High-energy proton imaging for biomedical applications

    DOE PAGES

    Prall, Matthias; Durante, Marco; Berger, Thomas; ...

    2016-06-10

    The charged particle community is looking for techniques exploiting proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object, it has proven highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods. For these methods, reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler. This device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are also included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.

  17. High-energy proton imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Prall, M.; Durante, M.; Berger, T.; Przybyla, B.; Graeff, C.; Lang, P. M.; Latessa, C.; Shestov, L.; Simoniello, P.; Danly, C.; Mariam, F.; Merrill, F.; Nedrow, P.; Wilde, C.; Varentsov, D.

    2016-06-01

    The charged particle community is looking for techniques exploiting proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object, it has proven highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods. For these methods, reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler. This device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.

  18. High-energy proton imaging for biomedical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prall, Matthias; Durante, Marco; Berger, Thomas

    The charged particle community is looking for techniques exploiting proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object, it has proven highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods. For these methods, reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler. This device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are also included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.

  19. Geosynchronous Meteorological Satellite Data Seminar

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A seminar was organized by NASA to acquaint the meteorological community with data now available, and data scheduled to be available in the future, from geosynchronous meteorological satellites. The twenty-four papers were presented in three half-day sessions in addition to tours of the Image Display and LANDSAT Processing Facilities during the afternoon of the second day.

  20. Dynamic image fusion and general observer preference

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Doe, Joshua M.

    2010-04-01

    Recent developments in image fusion give the user community many options for presenting imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon imagery and videos collected in scenarios typical of military environments. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.

  1. The moderate resolution imaging spectrometer (MODIS) science and data system requirements

    NASA Technical Reports Server (NTRS)

    Ardanuy, Philip E.; Han, Daesoo; Salomonson, Vincent V.

    1991-01-01

    The Moderate Resolution Imaging Spectrometer (MODIS) has been designated as a facility instrument on the first NASA polar orbiting platform as part of the Earth Observing System (EOS) and is scheduled for launch in the late 1990s. The near-global daily coverage of MODIS, combined with its continuous operation, broad spectral coverage, and relatively high spatial resolution, makes it central to the objectives of EOS. The development, implementation, production, and validation of the core MODIS data products define a set of functional, performance, and operational requirements on the data system that operate between the sensor measurements and the data products supplied to the user community. The science requirements guiding the processing of MODIS data are reviewed, and the aspects of an operations concept for the production of data products from MODIS for use by the scientific community are discussed.

  2. Improved detection of soma location and morphology in fluorescence microscopy images of neurons.

    PubMed

    Kayasandik, Cihan Bilge; Labate, Demetrio

    2016-12-01

    Automated detection and segmentation of somas in fluorescent images of neurons is a major goal in quantitative studies of neuronal networks, including applications of high-content screenings where multiple morphological properties of neurons must be quantified. Despite recent advances in image processing targeted at neurobiological applications, existing algorithms for soma detection are often unreliable, especially when processing fluorescence image stacks of neuronal cultures. In this paper, we introduce an innovative algorithm for the detection and extraction of somas in fluorescent images of networks of cultured neurons where somas and other structures exist in the same fluorescent channel. Our method relies on a new geometrical descriptor called the Directional Ratio and a collection of multiscale orientable filters to quantify the level of local isotropy in an image. To optimize the application of this approach, we introduce a new construction of multiscale anisotropic filters that is implemented by separable convolution. Extensive numerical experiments using 2D and 3D confocal images show that our automated algorithm reliably detects somas, accurately segments them, and separates contiguous ones. We include a detailed comparison with state-of-the-art existing methods to demonstrate that our algorithm is extremely competitive in terms of accuracy, reliability, and computational efficiency. Our algorithm will facilitate the development of automated platforms for high-content neuron image processing. A Matlab code is released open-source and freely available to the scientific community.
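
    Separable convolution, the implementation trick mentioned above, replaces one 2D filtering pass with two cheap 1D passes. The sketch below shows the idea with an isotropic Gaussian; the paper's filters are anisotropic and orientable, so this illustrates only the technique, not the authors' filters.

        import numpy as np
        from scipy.ndimage import convolve1d

        def gaussian_kernel1d(sigma):
            radius = max(1, int(3 * sigma))
            x = np.arange(-radius, radius + 1)
            k = np.exp(-0.5 * (x / sigma) ** 2)
            return k / k.sum()

        def separable_smooth(image, sigma):
            # Two 1D convolutions cost O(n*k) instead of O(n*k^2) for a
            # full 2D kernel of width k over n pixels.
            k = gaussian_kernel1d(sigma)
            out = convolve1d(image, k, axis=0, mode="reflect")
            return convolve1d(out, k, axis=1, mode="reflect")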

  3. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida over more than 15 years, beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with implementations in FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.
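
    The image-template convolution mentioned above generalizes ordinary convolution by parameterizing both the combining and the folding operation. A hedged sketch of the idea (in Python rather than Matlab, and not IAM's actual API): with multiply/sum it reduces to cross-correlation, while add/max yields a grayscale morphological dilation.

        import numpy as np

        def image_template_product(image, template, op=np.multiply, fold=np.sum):
            # Slide the template over the image, combine with `op`,
            # then fold each neighborhood to a scalar with `fold`.
            th, tw = template.shape
            h, w = image.shape
            out = np.zeros((h - th + 1, w - tw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = fold(op(image[i:i + th, j:j + tw], template))
            return out

        # Cross-correlation:   image_template_product(img, t)
        # Grayscale dilation:  image_template_product(img, t, np.add, np.max)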

  4. Automated design of image operators that detect interest points.

    PubMed

    Trujillo, Leonardo; Olague, Gustavo

    2008-01-01

    This work describes how evolutionary computation can be used to synthesize low-level image operators that detect interesting points on digital images. Interest point detection is an essential part of many modern computer vision systems that solve tasks such as object recognition, stereo correspondence, and image indexing, to name but a few. The design of the specialized operators is posed as an optimization/search problem that is solved with genetic programming (GP), a strategy still mostly unexplored by the computer vision community. The proposed approach automatically synthesizes operators that are competitive with state-of-the-art designs, taking into account an operator's geometric stability and the global separability of detected points during fitness evaluation. The GP search space is defined using simple primitive operations that are commonly found in point detectors proposed by the vision community. The experiments described in this paper extend previous results (Trujillo and Olague, 2006a,b) by presenting 15 new operators that were synthesized through the GP-based search. Some of the synthesized operators can be regarded as improved manmade designs because they employ well-known image processing techniques and achieve highly competitive performance. On the other hand, since the GP search also generates what can be considered as unconventional operators for point detection, these results provide a new perspective to feature extraction research.
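
    For context on the kind of operators being evolved, here is one classic hand-designed interest operator (a Harris-style corner response) built from exactly the primitives the GP search composes: image derivatives, Gaussian smoothing, and arithmetic. This is an illustrative baseline, not one of the paper's evolved operators.

        import numpy as np
        from scipy import ndimage

        def harris_like_response(img, sigma=1.5, k=0.05):
            img = np.asarray(img, dtype=float)
            Ix = ndimage.sobel(img, axis=1)   # horizontal gradient
            Iy = ndimage.sobel(img, axis=0)   # vertical gradient
            Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
            Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
            Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
            det = Sxx * Syy - Sxy ** 2
            trace = Sxx + Syy
            return det - k * trace ** 2       # high values = interest points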

  5. New Processing of Spaceborne Imaging Radar-C (SIR-C) Data

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Gracheva, V.; Arko, S. A.; Labelle-Hamer, A. L.

    2017-12-01

    The Spaceborne Imaging Radar-C (SIR-C) was a radar system which successfully operated on two separate shuttle missions in April and October 1994. During these two missions, a total of 143 hours of radar data were recorded. SIR-C was the first multifrequency and polarimetric spaceborne radar system, operating in dual frequency (L- and C-band) and with quad-polarization. SIR-C had a variety of different operating modes, which are innovative even from today's point of view. Depending on the mode, it was possible to acquire data with different polarizations and carrier frequency combinations. Additionally, different swaths and bandwidths could be used during the data collection, and it was possible to receive data with two antennas in the along-track direction. The United States Geological Survey (USGS) distributes the synthetic aperture radar (SAR) images as single-look complex (SLC) and multi-look complex (MLC) products. Unfortunately, since June 2005 the SIR-C processor has been inoperable and not repairable. All acquired SLC and MLC images were processed at a coarse resolution of 100 m with the goal of generating a quick look. These images, however, are not well suited for scientific analysis. Only a small percentage of the acquired data has been processed as full-resolution SAR images, and the unprocessed high-resolution data cannot currently be processed at all. At the Alaska Satellite Facility (ASF) a new processor was developed to process binary SIR-C data to full-resolution SAR images. ASF is planning to process the entire recoverable SIR-C archive to full-resolution SLCs, MLCs, and high-resolution geocoded image products. ASF will make these products available to the science community through their existing data archiving and distribution system. The final paper will describe the new processor and analyze the challenges of reprocessing the SIR-C data.

  6. Compound image segmentation of published biomedical figures.

    PubMed

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and bio-database communities to store images from publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into their constituent panels is an essential first step toward utilizing them. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at https://www.eecis.udel.edu/~compbio/FigSplit; the code is available upon request (contact: shatkay@udel.edu). Supplementary data are available at Bioinformatics online.
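
    A first-pass version of connected-component panel splitting is easy to sketch: treat near-white pixels as background, label the remaining regions, and keep the bounding boxes of sufficiently large components as candidate panels. FigSplit layers quality assessment and re-segmentation on top of this basic idea; the thresholds below are illustrative assumptions.

        import numpy as np
        from scipy import ndimage

        def split_panels(gray, white_thresh=240, min_area=5000):
            # gray: 2D uint8 grayscale image of the compound figure.
            mask = gray < white_thresh          # non-background content
            labels, _ = ndimage.label(mask)
            boxes = []
            for sl in ndimage.find_objects(labels):
                area = (sl[0].stop - sl[0].start) * (sl[1].stop - sl[1].start)
                if area >= min_area:            # drop specks and caption bits
                    boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
            return boxes  # (top, left, bottom, right) per candidate panel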

  7. NASA sea ice and snow validation plan for the Defense Meteorological Satellite Program special sensor microwave/imager

    NASA Technical Reports Server (NTRS)

    Cavalieri, Donald J. (Editor); Swift, Calvin T. (Editor)

    1987-01-01

    This document addresses the task of developing and executing a plan for validating the algorithm used for initial processing of sea ice data from the Special Sensor Microwave/Imager (SSMI). The document outlines a plan for monitoring the performance of the SSMI, for validating the derived sea ice parameters, and for providing quality data products before distribution to the research community. Because of recent advances in the application of passive microwave remote sensing to snow cover on land, the validation of snow algorithms is also addressed.

  8. Digitizing the KSO white light images

    NASA Astrophysics Data System (ADS)

    Pötzi, W.

    From 1989 to 2007, the Sun was observed at the Kanzelhöhe Observatory in white light on photographic film. The images are on transparent sheet films and are currently not available to the scientific community. The films are now being scanned with a photo scanner for transparent film material and then prepared for scientific use. The post-processing programs are already finished and produce FITS and JPEG files as output. Scanning should be finished by the end of 2011, and the data should then be available via our homepage.

  9. A pyroelectric thermal imaging system for use in medical diagnosis.

    PubMed

    Black, C M; Clark, R P; Darton, K; Goff, M R; Norman, T D; Spikes, H A

    1990-07-01

    The value of infra-red thermography in a number of pathologies, notably rheumatology and vascular diseases, is becoming well established. However, the high cost of thermal scanners and the associated image processing computers has been a limitation to the widespread availability of this technique to the clinical community. This paper describes a relatively inexpensive thermographic system based on a pyroelectric vidicon scanner and a microcomputer. Software has been written with particular reference to the use of thermography in rheumatoid arthritis and vasospastic conditions such as Raynaud's phenomenon.

  10. BOREAS Level-0 ER-2 Navigation Data

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Dominguez, Roseanne; Newcomer, Jeffrey A.; Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS Staff Science effort covered those activities that were BOREAS community-level activities or required uniform data collection procedures across sites and time. These activities included the acquisition, processing, and archiving of aircraft navigation/attitude data to complement the digital image data. The level-0 ER-2 navigation data files contain aircraft attitude and position information acquired during the digital image and photographic data collection missions. Temporally, the data were acquired from April to September 1994. Data were recorded at intervals of 5 seconds. The data are stored in tabular ASCII files.

  11. Exploitation of commercial remote sensing images: reality ignored?

    NASA Astrophysics Data System (ADS)

    Allen, Paul C.

    1999-12-01

    The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing numbers of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL, among others, are all working towards launch and service of one- to five-meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprising exploitation tools, exploitation training, library systems, and image management systems. From this it would appear the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small-quantity users that exist today, it will certainly adversely affect the mid- to large-sized users of the future.

  12. DOCLIB: a software library for document processing

    NASA Astrophysics Data System (ADS)

    Jaeger, Stefan; Zhu, Guangyu; Doermann, David; Chen, Kevin; Sampat, Summit

    2006-01-01

    Most researchers would agree that research in the field of document processing can benefit tremendously from a common software library through which institutions are able to develop and share research-related software and applications across academic, business, and government domains. However, despite several attempts in the past, the research community still lacks a widely-accepted standard software library for document processing. This paper describes a new library called DOCLIB, which tries to overcome the drawbacks of earlier approaches. Many of DOCLIB's features are unique either in themselves or in their combination with others, e.g. the factory concept for support of different image types, the juxtaposition of image data and metadata, or the add-on mechanism. We cherish the hope that DOCLIB serves the needs of researchers better than previous approaches and will readily be accepted by a larger group of scientists.

  13. Academic-Community Hospital Comparison of Vulnerabilities in Door-to-Needle Process for Acute Ischemic Stroke.

    PubMed

    Prabhakaran, Shyam; Khorzad, Rebeca; Brown, Alexandra; Nannicelli, Anna P; Khare, Rahul; Holl, Jane L

    2015-10-01

    Although best practices have been developed for achieving door-to-needle (DTN) times ≤60 minutes for stroke thrombolysis, critical DTN process failures persist. We sought to compare these failures in the Emergency Department at an academic medical center and a community hospital. Failure modes, effects, and criticality analysis was used to identify system and process failures. Multidisciplinary teams involved in DTN care participated in moderated sessions at each site. As a result, DTN process maps were created, and potential failures and their causes, frequency, severity, and existing safeguards were identified. For each failure, a risk priority number and criticality score were calculated; failures were then ranked, with the highest scores representing the most critical failures and targets for intervention. We detected a total of 70 failures in 50 process steps and 76 failures in 42 process steps at the community hospital and academic medical center, respectively. At the community hospital, critical failures included (1) delay in registration because of Emergency Department overcrowding, (2) incorrect triage diagnosis among walk-in patients, and (3) delay in obtaining consent for thrombolytic treatment. At the academic medical center, critical failures included (1) incorrect triage diagnosis among walk-in patients, (2) delay in stroke team activation, and (3) delay in obtaining computed tomographic imaging. Although the identification of common critical failures suggests opportunities for a generalizable process redesign, differences in the criticality and nature of failures must be addressed at the individual hospital level to develop robust and sustainable solutions to reduce DTN time.
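
    The scoring step works as follows in a conventional failure modes, effects, and criticality analysis (the abstract does not give the study's exact scales, so the 1-10 ratings and failure entries below are purely illustrative): each failure mode is rated for severity (S), occurrence/frequency (O), and detectability (D); the risk priority number is RPN = S × O × D, while criticality is typically S × O.

        # Illustrative FMECA scoring sketch; all ratings are invented for
        # demonstration and do not come from the study.
        failures = {
            "registration delay (ED overcrowding)": dict(S=8, O=7, D=4),
            "incorrect triage diagnosis (walk-in)": dict(S=9, O=6, D=6),
            "delay obtaining thrombolysis consent": dict(S=7, O=5, D=3),
        }

        ranked = sorted(failures.items(),
                        key=lambda kv: kv[1]["S"] * kv[1]["O"] * kv[1]["D"],
                        reverse=True)
        for name, r in ranked:
            rpn = r["S"] * r["O"] * r["D"]   # risk priority number
            crit = r["S"] * r["O"]           # criticality (ignores detectability)
            print(f"{name}: RPN={rpn}, criticality={crit}")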

  14. Scale dependency of forest functional diversity assessed using imaging spectroscopy and airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Schneider, F. D.; Morsdorf, F.; Schmid, B.; Petchey, O. L.; Hueni, A.; Schimel, D.; Schaepman, M. E.

    2016-12-01

    Forest functional traits offer a mechanistic link between ecological processes and community structure and assembly rules. However, measuring functional traits of forests in a continuous and consistent way is particularly difficult due to the complexity of in-situ measurements and geo-referencing. New imaging spectroscopy measurements overcome these limitations, allowing physiological traits to be mapped at broad spatial scales. We mapped leaf chlorophyll, carotenoids, and leaf water content over 900 ha of temperate mixed forest (Fig. 1a). The selected traits are functionally important because they indicate the photosynthetic potential of trees, leaf longevity and protection, as well as tree water and drought stress. Spatially continuous measurements at the scale of individual tree crowns allowed us to assess functional diversity patterns across a range of ecological extents. We used indexes of functional richness, divergence, and evenness to map different aspects of diversity. Fig. 1b shows an example of physiological richness at an extent of 240 m radius. We compared physiological to morphological diversity patterns, the latter derived from plant area index, canopy height, and foliage height diversity. Our results show that patterns of physiological and morphological diversity generally agree, as measured independently by airborne imaging spectroscopy and airborne laser scanning, respectively. The occurrence of disturbance areas and mixtures of broadleaf and needle trees were the main drivers of the observed diversity patterns. Spatial patterns at varying extents and richness-area relationships indicated that environmental filtering is the predominant community assembly process. Our results demonstrate the potential for mapping physiological and morphological diversity in a temperate mixed forest, between and within species, on scales relevant to study community assembly and structure from space and to test the corresponding measurement schemes.

  15. High-dynamic-range imaging for cloud segmentation

    NASA Astrophysics Data System (ADS)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

    Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
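
    The fusion step can be illustrated compactly. Below is a simplified well-exposedness-weighted fusion of an exposure stack in the spirit of multi-exposure fusion; HDRCloudSeg's actual pipeline builds HDR radiance maps before segmentation, so treat this as a generic sketch with illustrative parameters.

        import numpy as np

        def fuse_exposures(stack, sigma=0.2):
            # stack: (n_exposures, H, W) float images scaled to [0, 1].
            # Weight each pixel by how close it is to mid-gray (well exposed),
            # then blend across exposures.
            w = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2) + 1e-8
            return (w * stack).sum(axis=0) / w.sum(axis=0)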

  16. A 'user friendly' geographic information system in a color interactive digital image processing system environment

    NASA Technical Reports Server (NTRS)

    Campbell, W. J.; Goldberg, M.

    1982-01-01

    NASA's Eastern Regional Remote Sensing Applications Center (ERRSAC) has recognized the need to accommodate spatial analysis techniques in its remote sensing technology transfer program. A computerized Geographic Information System incorporating remotely sensed data, specifically Landsat, with other relevant data was considered a realistic approach to addressing a given resource problem. Questions arose concerning the selection of a suitable available software system for demonstrations, training, and demonstration projects with ERRSAC's user community. The very specific requirements for such a system are discussed. The solution found involved the addition of geographic information processing functions to the Interactive Digital Image Manipulation System (IDIMS). Details regarding the functions of the new integrated system are examined, along with the characteristics of the software.

  17. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With advancements in computational power and algorithm design, map processing systems have improved considerably over the last decade. However, fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in the graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system accordingly. These issues could slow the further advancement of map processing techniques, as such unsuccessful attempts create a discouraged user community and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (which can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  18. A system for optimal edging and trimming of rough hardwood lumber

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt; Philip A. Araman

    2003-01-01

    Despite the importance of improving lumber processing early in manufacturing, scanning of unplaned, green hardwood lumber has received relatively little attention in the research community. This has been due in part to the difficulty of clearly imaging fresh-cut boards whose fibrous surfaces mask many wood features. This paper describes a prototype system that scans...

  19. Automatic scanning of rough hardwood lumber for edging and trimming

    Treesearch

    A. Lynn Abbott; Daniel L. Schmoldt; Philip A. Araman; Sang-Mook Lee

    2001-01-01

    Scanning of unplaned, green hardwood lumber has received relatively little attention in the research community. This has been due in part to the difficulty of clearly imaging fresh-cut boards whose fibrous surfaces mask many wood features. Nevertheless, it is important to improve lumber processing early in the manufacturing stream because much wood material is...

  20. NASA IKONOS Radiometric Characterization

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Frisbee, Troy; Zanoni, Vicki; Blonski, Slawek; Daehler, Erik; Grant, Brennan; Holekamp, Kara; Ryan, Robert; Sellers, Richard; Smith, Charles

    2002-01-01

    The objective of this program: perform radiometric vicarious calibrations of IKONOS imagery and compare them with Space Imaging's calibration coefficients. The approach taken: utilize multiple well-characterized sites that are widely used by the NASA science community for radiometric characterization of airborne and spaceborne sensors, and perform independent characterizations with independent teams. Each team has slightly different measurement techniques and data processing methods.
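
    The record describes the method only at a high level, but the core arithmetic of a reflectance-based vicarious calibration is a simple ratio of predicted top-of-atmosphere radiance to the sensor's raw digital numbers. A minimal sketch, with illustrative numbers in place of real site measurements and a hypothetical vendor coefficient:

```python
# Minimal vicarious-calibration comparison: ratio predicted TOA radiance
# over a well-characterized site against the sensor's mean DN to derive
# a gain, then compare with the published coefficient. Values are
# illustrative, not actual IKONOS numbers.
import numpy as np

dn = np.array([412.0, 405.0, 398.0])        # mean image DN over the site
l_predicted = np.array([58.1, 57.3, 56.2])  # predicted TOA radiance, W/(m^2 sr um)

gain_vicarious = l_predicted / dn           # radiance per DN
gain_published = np.array([0.1414, 0.1414, 0.1414])  # hypothetical vendor value

pct_diff = 100.0 * (gain_vicarious - gain_published) / gain_published
print("vicarious gains:", gain_vicarious.round(4))
print("percent difference from published:", pct_diff.round(1))
```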

  1. The Principal's Role in Marketing the School: Subjective Interpretations and Personal Influences

    ERIC Educational Resources Information Center

    Oplatka, Izhar

    2007-01-01

    The literature on educational marketing to date has been concerned with the ways by which schools market and promote themselves in the community, their strategies to maintain and enhance their image, and the factors affecting parents and children and the processes they undergo when choosing their junior high and high school. Yet, there remains a…

  2. Storytellers: The Image of the Two-Year College in American Fiction and in Women's Journals.

    ERIC Educational Resources Information Center

    LaPaglia, Nancy

    Finding that community colleges and their female students are rarely and disparagingly depicted in fiction motivated this study of the image of community colleges in literature, movies, and television. The study also sought to compare this image with that emerging from the journal entries of 23 women community college students and 14 faculty…

  3. Technical note: DIRART--A software suite for deformable image registration and adaptive radiotherapy research.

    PubMed

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A

    2011-01-01

    Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.
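
    The inverse-consistency property mentioned above can be checked numerically: composing the forward displacement field with the backward field should leave (nearly) zero residual displacement. The following is a small numpy/scipy illustration with synthetic 2D fields, not DIRART's MATLAB code:

```python
# Illustrative inverse-consistency check for a pair of displacement
# vector fields (DVFs). A forward field u maps x -> x + u(x); sampling
# the backward field v at the warped positions and adding should give
# approximately zero. Fields here are synthetic.
import numpy as np
from scipy.ndimage import map_coordinates

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)

# Synthetic smooth forward field u; v = -u is only an approximate
# inverse, which is exactly what the consistency metric should expose.
u_y = 2.0 * np.sin(2 * np.pi * xx / nx)
u_x = 2.0 * np.cos(2 * np.pi * yy / ny)
v_y, v_x = -u_y, -u_x

# Sample the backward field at the forward-warped positions.
warp_y, warp_x = yy + u_y, xx + u_x
v_y_at_warp = map_coordinates(v_y, [warp_y, warp_x], order=1, mode="nearest")
v_x_at_warp = map_coordinates(v_x, [warp_y, warp_x], order=1, mode="nearest")

residual = np.hypot(u_y + v_y_at_warp, u_x + v_x_at_warp)
print(f"mean inverse-consistency error: {residual.mean():.3f} pixels")
```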

  4. Parallel excitation-emission multiplexed fluorescence lifetime confocal microscopy for live cell imaging.

    PubMed

    Zhao, Ming; Li, Yu; Peng, Leilei

    2014-05-05

    We present a novel excitation-emission multiplexed fluorescence lifetime microscopy (FLIM) method that surpasses current FLIM techniques in multiplexing capability. The method employs Fourier multiplexing to simultaneously acquire confocal fluorescence lifetime images of multiple excitation wavelength and emission color combinations at 44,000 pixels/sec. The system is built with low-cost CW laser sources and standard PMTs with versatile spectral configuration, and can be implemented as an add-on to commercial confocal microscopes. The Fourier lifetime confocal method allows fast multiplexed FLIM imaging, which makes it possible to monitor multiple biological processes in live cells. The low cost and compatibility with commercial systems could also make multiplexed FLIM more accessible to the biological research community.
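
    The Fourier-multiplexing idea can be illustrated compactly: if each excitation line is intensity-modulated at its own frequency, a single detector trace can be demodulated per frequency, and the phase lag phi at frequency f yields a lifetime via tau = tan(phi) / (2*pi*f). The sketch below uses assumed frequencies and a synthetic trace; it is not the authors' acquisition code.

```python
# Sketch of the frequency-domain idea behind Fourier-multiplexed FLIM
# (illustrative parameters, not the authors' implementation).
import numpy as np

fs = 250e6                        # detector sampling rate, 250 MS/s (assumed)
t = np.arange(0, 20e-6, 1 / fs)   # 20 us pixel dwell (assumed)
mod_freqs = [20e6, 32e6]          # one modulation frequency per excitation line
true_tau = 4e-9                   # 4 ns fluorescence lifetime (assumed)

# Simulate the summed detector trace: each channel's emission lags its
# excitation modulation by phi = arctan(2*pi*f*tau).
trace = np.zeros_like(t)
for f in mod_freqs:
    phi = np.arctan(2 * np.pi * f * true_tau)
    trace += 1.0 + np.cos(2 * np.pi * f * t - phi)

# Demodulate: in-phase and quadrature components at each frequency give
# the phase, and hence the lifetime, channel by channel.
for f in mod_freqs:
    g = 2 * np.mean(trace * np.cos(2 * np.pi * f * t))
    s = 2 * np.mean(trace * np.sin(2 * np.pi * f * t))
    tau = np.tan(np.arctan2(s, g)) / (2 * np.pi * f)
    print(f"f = {f / 1e6:.0f} MHz -> recovered tau = {tau * 1e9:.2f} ns")
```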

  5. Automated processing of zebrafish imaging data: a survey.

    PubMed

    Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-09-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.

  6. Automated Processing of Zebrafish Imaging Data: A Survey

    PubMed Central

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-01-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  7. Saliency-aware food image segmentation for personal dietary assessment using a wearable computer

    PubMed Central

    Chen, Hsin-Chen; Jia, Wenyan; Sun, Xin; Li, Zhaoxin; Li, Yuecheng; Fernstrom, John D.; Burke, Lora E.; Baranowski, Thomas; Sun, Mingui

    2015-01-01

    Image-based dietary assessment has recently received much attention in the community of obesity research. In this assessment, foods in digital pictures are specified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most utilized method, image processing holds much promise since it may eventually lead to automatic dietary assessment. In this paper we study the problem of segmenting food objects from images. This segmentation is difficult because of various food types, shapes and colors, different decorating patterns on food containers, and occlusions of food and non-food objects. We propose a novel method based on a saliency-aware active contour model (ACM) for automatic food segmentation from images acquired by a wearable camera. An integrated saliency estimation approach based on food location priors and visual attention features is designed to produce a salient map of possible food regions in the input image. Next, a geometric contour primitive is generated and fitted to the salient map by means of multi-resolution optimization with respect to a set of affine and elastic transformation parameters. The food regions are then extracted after contour fitting. Our experiments using 60 food images showed that the proposed method achieved significantly higher accuracy in food segmentation when compared to conventional segmentation methods. PMID:26257473
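
    As a rough illustration of the saliency-estimation step only (a simplified stand-in, not the paper's actual model, and omitting the active contour fitting), one can combine a centre-biased food-location prior with a centre-surround contrast feature and threshold the product to seed a contour:

```python
# Greatly simplified saliency sketch: a Gaussian location prior (foods
# tend to sit near the frame centre in wearable-camera images) times a
# centre-surround difference-of-Gaussians contrast feature.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gray):
    """gray: 2D float array in [0, 1]; returns saliency in [0, 1]."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Location prior: Gaussian bump at the image centre (assumption).
    sigma = 0.25 * min(h, w)
    prior = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * sigma**2))
    # Visual-attention feature: centre-surround contrast.
    contrast = np.abs(gaussian_filter(gray, 2) - gaussian_filter(gray, 16))
    sal = prior * contrast
    return sal / (sal.max() + 1e-12)

rng = np.random.default_rng(0)
frame = rng.random((240, 320))          # stand-in for a camera frame
seed_mask = saliency_map(frame) > 0.5   # initial region for contour fitting
print("seed pixels:", int(seed_mask.sum()))
```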

  8. Body image in the person with a stoma.

    PubMed

    Cohen, A

    1991-01-01

    Body image is the mental picture one has of one's physical being; it develops from birth, continues throughout life, and is related to different factors affecting its formation and dynamics. A crisis such as the creation of a stoma leads to an alteration in body image and an awareness of the meaning of the change in the appearance and function of an individual. The individual's behavior is examined in several domains: physical, mental, emotional, social, sexual, and economic. When one domain is disturbed, the others will be influenced. A person's rehabilitation after ostomy surgery is a continuous process of adaptation and is directed toward returning to a normal way of life. Many factors affect this adaptation to an alteration in body image and are relevant to the patient and family. These factors include, but are not limited to, the disease process, treatment(s), and medical and nursing care in the hospital and community. Knowledge about actual and potential problems associated with an alteration in body image enables the nurse to assess the meaning of the alteration for the individual patient and family, provide counseling before and after the surgery, and intervene so that the individual will be able to adapt to the alteration in body image and return to previous activities of daily living and lifestyle.

  9. Saliency-aware food image segmentation for personal dietary assessment using a wearable computer

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Jia, Wenyan; Sun, Xin; Li, Zhaoxin; Li, Yuecheng; Fernstrom, John D.; Burke, Lora E.; Baranowski, Thomas; Sun, Mingui

    2015-02-01

    Image-based dietary assessment has recently received much attention in the community of obesity research. In this assessment, foods in digital pictures are specified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most utilized method, image processing holds much promise since it may eventually lead to automatic dietary assessment. In this paper we study the problem of segmenting food objects from images. This segmentation is difficult because of various food types, shapes and colors, different decorating patterns on food containers, and occlusions of food and non-food objects. We propose a novel method based on a saliency-aware active contour model (ACM) for automatic food segmentation from images acquired by a wearable camera. An integrated saliency estimation approach based on food location priors and visual attention features is designed to produce a salient map of possible food regions in the input image. Next, a geometric contour primitive is generated and fitted to the salient map by means of multi-resolution optimization with respect to a set of affine and elastic transformation parameters. The food regions are then extracted after contour fitting. Our experiments using 60 food images showed that the proposed method achieved significantly higher accuracy in food segmentation when compared to conventional segmentation methods.

  10. Label-free molecular imaging of bacterial communities of the opportunistic pathogen Pseudomonas aeruginosa

    NASA Astrophysics Data System (ADS)

    Baig, Nameera; Polisetti, Sneha; Morales-Soto, Nydia; Dunham, Sage J. B.; Sweedler, Jonathan V.; Shrout, Joshua D.; Bohn, Paul W.

    2016-09-01

    Biofilms, such as those formed by the opportunistic human pathogen Pseudomonas aeruginosa, are complex, matrix-enclosed, surface-associated communities of cells. Bacteria that are part of a biofilm community are much more resistant to antibiotics and the host immune response than their free-floating counterparts. P. aeruginosa biofilms are associated with persistent and chronic infections in diseases such as cystic fibrosis and HIV-AIDS. P. aeruginosa synthesizes and secretes signaling molecules such as the Pseudomonas quinolone signal (PQS), which are implicated in quorum sensing (QS), where bacteria regulate gene expression based on population density. Processes such as biofilm formation and virulence are regulated by QS. This manuscript describes the powerful molecular imaging capabilities of confocal Raman microscopy (CRM) and surface-enhanced Raman spectroscopy (SERS) in conjunction with multivariate statistical tools such as principal component analysis (PCA) for studying the spatiotemporal distribution of signaling molecules, secondary metabolites and virulence factors in biofilm communities of P. aeruginosa. Our observations reveal that the laboratory strain PAO1C synthesizes and secretes 2-alkyl-4-hydroxyquinoline N-oxides and 2-alkyl-4-hydroxyquinolones in high abundance, while the isogenic acyl homoserine lactone QS-deficient mutant (ΔlasIΔrhlI) strain produces predominantly 2-alkyl-quinolones during biofilm formation. This study underscores the use of CRM, along with traditional biological tools such as genetics, for studying the behavior of microbial communities at the molecular level.
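
    The multivariate step named in the abstract is standard enough to sketch: stack per-pixel Raman spectra into a matrix and let PCA produce score maps in which different chemical signatures separate. Synthetic spectra stand in for real CRM data here; this is not the authors' analysis code.

```python
# PCA over a stack of per-pixel spectra: pixels with a shared spectral
# band cluster along a principal component, so its score map highlights
# where that signature dominates. Data below are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_pixels, n_wavenumbers = 400, 600
spectra = rng.normal(size=(n_pixels, n_wavenumbers))
spectra[:200, 250:260] += 3.0   # pretend half the pixels carry a PQS-like band

pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)   # (pixels, components) score values
print(pca.explained_variance_ratio_.round(3))
```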

  11. Drawing and interpreting data: Children's impressions of onchocerciasis and community-directed treatment with ivermectin (CDTI) in four onchocerciasis endemic countries in Africa

    PubMed Central

    Amuyunzu-Nyamongo, Mary; Tchounkeu, Yolande Flore Longang; Oyugi, Rahel Akumu; Kabali, Asaph Turinde; Okeibunor, Joseph C.; Manianga, Cele; Amazigo, Uche V.

    2011-01-01

    Although the depiction of a child leading a blind man is the most enduring image of onchocerciasis in Africa, research activities have hardly involved children. This paper aims at giving voice to children through drawings and their interpretation. The study was conducted in 2009 in Cameroon, Democratic Republic of Congo (DRC), Nigeria and Uganda. Children aged 6–16 years were asked to draw their perceptions of onchocerciasis and community-directed treatment with ivermectin (CDTI) in their communities. A total of 50 drawings were generated. The drawings depicted four main aspects of onchocerciasis: (1) the disease symptoms, (2) the negative consequences of onchocerciasis among children and in the community generally, (3) the ivermectin distribution process, and (4) the benefits or effects of taking ivermectin. Out of the 50 drawings, 30 were on symptoms, 7 on effects of the disease on children, 8 on distribution process, and 5 represented multiple perceptions on symptoms, drug distribution processes, benefits, and effects of treatment. The lack of clarity when treatment with ivermectin can be stopped in endemic areas requires working with children to ensure continued compliance with treatment into the future. Children's drawings should be incorporated into health education interventions. PMID:21637349

  12. The Advanced Rapid Imaging and Analysis (ARIA) Project: Providing Standard and On-Demand SAR products for Hazard Science and Hazard Response

    NASA Astrophysics Data System (ADS)

    Owen, S. E.; Hua, H.; Rosen, P. A.; Agram, P. S.; Webb, F.; Simons, M.; Yun, S. H.; Sacco, G. F.; Liu, Z.; Fielding, E. J.; Lundgren, P.; Moore, A. W.

    2017-12-01

    A new era of geodetic imaging arrived with the launch of the ESA Sentinel-1A/B satellites in 2014 and 2016, and with the 2016 confirmation of the NISAR mission, planned for launch in 2021. These missions assure high quality, freely and openly distributed regularly sampled SAR data into the indefinite future. These unprecedented data sets are a watershed for solid earth sciences as we progress towards the goal of ubiquitous InSAR measurements. We now face the challenge of how to best address the massive volumes of data and intensive processing requirements. Should scientists individually process the same data independently themselves? Should a centralized service provider create standard products that all can use? Are there other approaches to accelerate science that are cost effective and efficient? The Advanced Rapid Imaging and Analysis (ARIA) project, a joint venture co-sponsored by California Institute of Technology (Caltech) and by NASA through the Jet Propulsion Laboratory (JPL), is focused on rapidly generating higher level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. However, there are challenges in defining the optimal InSAR data products for the solid earth science community. In this presentation, we will present our experience with InSAR users, our lessons learned the advantages of on demand and standard products, and our proposal for the most effective path forward.

  13. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
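
    Two of the planning quantities underpinning that discussion, ground sample distance and forward motion blur, reduce to one-line formulas. A small worked example with illustrative camera and flight parameters (all numbers assumed):

```python
# Back-of-envelope image-quality planning checks of the kind the
# authors advocate. All parameter values are illustrative.
def ground_sample_distance(pixel_pitch_um, focal_mm, altitude_m):
    """GSD in metres per pixel for a nadir-pointing camera."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_mm * 1e-3)

def motion_blur_pixels(speed_ms, shutter_s, gsd_m):
    """Forward image motion during the exposure, in pixels."""
    return speed_ms * shutter_s / gsd_m

gsd = ground_sample_distance(pixel_pitch_um=4.8, focal_mm=16, altitude_m=100)
blur = motion_blur_pixels(speed_ms=10, shutter_s=1 / 1000, gsd_m=gsd)
print(f"GSD = {gsd * 100:.1f} cm/pixel, blur = {blur:.2f} pixels")
```

    With these assumed values the blur is well under a pixel; halving the shutter speed or doubling the flight speed doubles it, which is exactly the kind of trade-off the paper urges authors to report.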

  14. Digital images are data: and should be treated as such.

    PubMed

    Cromey, Douglas W

    2013-01-01

    The scientific community has become very concerned about inappropriate image manipulation. In journals that check figures after acceptance, 20-25% of the papers contained at least one figure that did not comply with the journal's instructions to authors. The scientific press continues to report a small, but steady stream of cases of fraudulent image manipulation. Inappropriate image manipulation taints the scientific record, damages trust within science, and degrades science's reputation with the general public. Scientists can learn from historians and photojournalists, who have provided a number of examples of attempts to alter or misrepresent the historical record. Scientists must remember that digital images are numerically sampled data that represent the state of a specific sample when examined with a specific instrument. These data should be carefully managed. Changes made to the original data need to be tracked like the protocols used for other experimental procedures. To avoid pitfalls, unexpected artifacts, and unintentional misrepresentation of the image data, a number of image processing guidelines are offered.
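
    One concrete way to honour the guidelines offered here, keeping the original file immutable, recording its checksum, and tracking every processing step like a protocol, is a sidecar provenance log. A minimal sketch with hypothetical file names:

```python
# Record the original image's checksum and append each processing step
# to an audit log, so changes to the data remain traceable. File names
# are hypothetical placeholders.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def log_step(logfile, action, **params):
    entry = {"time": datetime.now(timezone.utc).isoformat(),
             "action": action, "params": params}
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

original = "blot_raw.tif"   # hypothetical acquisition output; never edited
log_step("blot_provenance.jsonl", "acquired", sha256=sha256_of(original))
log_step("blot_provenance.jsonl", "contrast_stretch",
         applied_to_copy=True, percentile_clip=[1, 99])
```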

  15. Digital Images Are Data: And Should Be Treated as Such

    PubMed Central

    Cromey, Douglas W.

    2014-01-01

    The scientific community has become very concerned about inappropriate image manipulation. In journals that check figures after acceptance, 20–25% of the papers contained at least one figure that did not comply with the journal’s instructions to authors. The scientific press continues to report a small, but steady stream of cases of fraudulent image manipulation. Inappropriate image manipulation taints the scientific record, damages trust within science, and degrades science’s reputation with the general public. Scientists can learn from historians and photojournalists, who have provided a number of examples of attempts to alter or misrepresent the historical record. Scientists must remember that digital images are numerically sampled data that represent the state of a specific sample when examined with a specific instrument. These data should be carefully managed. Changes made to the original data need to be tracked like the protocols used for other experimental procedures. To avoid pitfalls, unexpected artifacts, and unintentional misrepresentation of the image data, a number of image processing guidelines are offered. PMID:23026995

  16. GRI: The Gamma-Ray Imager mission

    NASA Astrophysics Data System (ADS)

    Knödlseder, Jürgen; GRI Consortium

    With the INTEGRAL observatory, ESA has provided a unique tool to the astronomical community revealing hundreds of sources, new classes of objects, extraordinary views of antimatter annihilation in our Galaxy, and fingerprints of recent nucleosynthesis processes. While INTEGRAL provides the global overview over the soft gamma-ray sky, there is a growing need to perform deeper, more focused investigations of gamma-ray sources. In soft X-rays a comparable step was taken going from the Einstein and the EXOSAT satellites to the Chandra and XMM/Newton observatories. Technological advances in the past years in the domain of gamma-ray focusing using Laue diffraction have paved the way towards a new gamma-ray mission, providing major improvements regarding sensitivity and angular resolution. Such a future Gamma-Ray Imager will allow studies of particle acceleration processes and explosion physics in unprecedented detail, providing essential clues on the innermost nature of the most violent and most energetic processes in the Universe.

  17. GRI: The Gamma-Ray Imager mission

    NASA Astrophysics Data System (ADS)

    Knödlseder, Jürgen; GRI Consortium

    2006-06-01

    With the INTEGRAL observatory, ESA has provided a unique tool to the astronomical community revealing hundreds of sources, new classes of objects, extraordinary views of antimatter annihilation in our Galaxy, and fingerprints of recent nucleosynthesis processes. While INTEGRAL provides the global overview over the soft gamma-ray sky, there is a growing need to perform deeper, more focused investigations of gamma-ray sources. In soft X-rays a comparable step was taken going from the Einstein and the EXOSAT satellites to the Chandra and XMM/Newton observatories. Technological advances in the past years in the domain of gamma-ray focusing using Laue diffraction have paved the way towards a new gamma-ray mission, providing major improvements regarding sensitivity and angular resolution. Such a future Gamma-Ray Imager will allow the study of particle acceleration processes and explosion physics in unprecedented detail, providing essential clues on the innermost nature of the most violent and most energetic processes in the Universe.

  18. Editorial

    NASA Astrophysics Data System (ADS)

    Burton, Mike

    2015-07-01

    Magmatic degassing plays a key role in the dynamics of volcanic activity and also contributes to the carbon, water and sulphur volatile cycles on Earth. Quantifying the fluxes of magmatic gas emitted from volcanoes is therefore of fundamental importance in Earth Science. This has been recognised since the beginning of modern volcanology, with initial measurements of volcanic SO2 flux conducted with COrrelation SPECtrometer (COSPEC) instruments from the late seventies. While COSPEC measurements continue today, they have been largely superseded by compact grating spectrometers, which were first introduced soon after the start of the 21st Century. Since 2006, a new approach to measuring fluxes has appeared: quantitative imaging of the SO2 slant column amount in a volcanic plume. Quantitative imaging of volcanic plumes has created new opportunities and challenges, and in April 2013 an ESF-funded MeMoVolC workshop was held with the objectives of bringing together the main research groups, creating a vibrant, interconnected community, and examining the current state of the art of this new research frontier. This special issue of sixteen papers within the Journal of Volcanology and Geothermal Research is the direct result of the discussions, intercomparisons and results reported in that workshop. The papers report on the volcanological objectives of the plume imaging community, the state of the art of the technology used, intercomparisons, validations, novel methods and results from field applications. Quantitative imaging of volcanic plumes is achieved using both infrared and ultraviolet wavelengths, with each wavelength offering a different trade-off of strengths and weaknesses, and the papers in this issue reflect this flexibility. Gas compositions can also be imaged, and this approach offers much promise in the quantification of chemical processing within plumes. One of the key advantages of the plume imaging approach is that gas flux measurements can be achieved at 1-10 Hz, allowing direct comparisons with geophysical measurements and opening new, interdisciplinary opportunities to deepen our understanding of volcanological processes. Several challenges remain, such as dealing with light-scattering issues and fully automating data processing. However, it is clear that quantitative plume imaging will have a lasting and profound impact on how volcano observatories operate, our ability to forecast and manage volcanic eruptions, our constraints on global volcanic gas fluxes, and our understanding of magma dynamics.
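
    The flux computation that plume imaging enables at 1-10 Hz is, per frame, a straightforward integration: sum the slant column densities along a transect across the plume and multiply by the plume transport speed. A sketch with illustrative values (not taken from any paper in the issue):

```python
# Standard SO2 flux arithmetic: integrate column densities along one
# image transect perpendicular to transport, then scale by plume speed
# and convert molecules to mass. All values are illustrative.
import numpy as np

N_A = 6.022e23        # Avogadro's number, 1/mol
M_SO2 = 0.064         # molar mass of SO2, kg/mol

scd = np.full(120, 2.5e17)   # slant columns along the transect, molecules/cm^2
pixel_width_m = 1.5          # ground size of one pixel across the plume
plume_speed_ms = 8.0         # e.g. from cross-correlating successive frames

# molecules per metre of plume length: convert cm^-2 -> m^-2, times width
molecules_per_m = (scd * 1e4 * pixel_width_m).sum()
flux_kg_s = molecules_per_m * plume_speed_ms * M_SO2 / N_A
print(f"SO2 flux ~ {flux_kg_s:.2f} kg/s")
```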

  19. Light microscopy applications in systems biology: opportunities and challenges

    PubMed Central

    2013-01-01

    Biological systems present multiple scales of complexity, ranging from molecules to entire populations. Light microscopy is one of the least invasive techniques used to access information from various biological scales in living cells. The combination of molecular biology and imaging provides a bottom-up tool for direct insight into how molecular processes work on a cellular scale. However, imaging can also be used as a top-down approach to study the behavior of a system without detailed prior knowledge about its underlying molecular mechanisms. In this review, we highlight the recent developments on microscopy-based systems analyses and discuss the complementary opportunities and different challenges with high-content screening and high-throughput imaging. Furthermore, we provide a comprehensive overview of the available platforms that can be used for image analysis, which enable community-driven efforts in the development of image-based systems biology. PMID:23578051

  20. Community archiving of imaging studies

    NASA Astrophysics Data System (ADS)

    Fritz, Steven L.; Roys, Steven R.; Munjal, Sunita

    1996-05-01

    The quantity of image data created in a large radiology practice has long been a challenge for available archiving technology. Traditional methods of archiving the large quantity of films generated in radiology have relied on warehousing in remote sites, with courier delivery of film files for historical comparisons. A digital community archive, accessible via a wide area network, represents a feasible solution to the problem of archiving digital images from a busy practice. In addition, it affords a physician caring for a patient access to imaging studies performed at a variety of healthcare institutions without the need to repeat studies. Security problems include both network security issues in the WAN environment and access control for patient, physician and imaging center. The key obstacle to developing a community archive is currently political. Reluctance to participate in a community archive can be reduced by appropriate design of the access mechanisms.

  1. Daylight coloring for monochrome infrared imagery

    NASA Astrophysics Data System (ADS)

    Gabura, James

    2015-05-01

    The effectiveness of infrared imagery in poor visibility situations is well established, and the range of applications is expanding as we enter a new era of inexpensive thermal imagers for mobile phones. However, the counterintuitive reflectance characteristics of various common scene elements can cause slowed reaction times and impaired situational awareness, consequences that can be especially detrimental in emergency situations. While multiband infrared sensors can be used, they are inherently more costly. Here we propose a technique for adding a daylight color appearance to single-band infrared images, using the normally overlooked property of local image texture. The simple method described here is illustrated with colorized images from the visible red and long-wave infrared bands. Our colorizing process not only imparts a natural daylight appearance to infrared images but also enhances the contrast and visibility of otherwise obscure detail. We anticipate that this colorizing method will lead to a better user experience, faster reaction times and improved situational awareness for a growing community of infrared camera users. A natural extension of our process could expand upon its texture-discerning feature by adding specialized filters for discriminating specific targets.
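
    The abstract does not spell out the texture-to-colour mapping, so the following is only a guess at the flavour of the approach, not the author's actual method: estimate local texture with a standard-deviation filter and steer hue between vegetation-like and sky-like tones accordingly, keeping the IR intensity as brightness.

```python
# Toy texture-driven colorization of a single-band IR image. The hue
# mapping below is assumed for illustration only.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, size=9):
    """Local standard deviation as a simple texture measure."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0))

def colorize_ir(ir):
    """ir: 2D float array in [0, 1]; returns an HSV-style (h, w, 3) stack."""
    texture = local_std(ir)
    texture = texture / (texture.max() + 1e-12)
    hue = 0.33 * texture + 0.60 * (1 - texture)  # green-ish vs blue-ish (assumed)
    sat = np.full_like(ir, 0.5)
    val = ir                                     # keep IR intensity as brightness
    return np.stack([hue, sat, val], axis=-1)

rng = np.random.default_rng(1)
hsv = colorize_ir(rng.random((120, 160)))
print(hsv.shape)
```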

  2. Characterizing canopy biochemistry from imaging spectroscopy and its application to ecosystem studies

    USGS Publications Warehouse

    Kokaly, R.F.; Asner, Gregory P.; Ollinger, S.V.; Martin, M.E.; Wessman, C.A.

    2009-01-01

    For two decades, remotely sensed data from imaging spectrometers have been used to estimate non-pigment biochemical constituents of vegetation, including water, nitrogen, cellulose, and lignin. This interest has been motivated by the important role that these substances play in physiological processes such as photosynthesis, their relationships with ecosystem processes such as litter decomposition and nutrient cycling, and their use in identifying key plant species and functional groups. This paper reviews three areas of research to improve the application of imaging spectrometers to quantify non-pigment biochemical constituents of plants. First, we examine recent empirical and modeling studies that have advanced our understanding of leaf and canopy reflectance spectra in relation to plant biochemistry. Next, we present recent examples of how spectroscopic remote sensing methods are applied to characterize vegetation canopies, communities and ecosystems. Third, we highlight the latest developments in using imaging spectrometer data to quantify net primary production (NPP) over large geographic areas. Finally, we discuss the major challenges in quantifying non-pigment biochemical constituents of plant canopies from remotely sensed spectra.

  3. Divers-Operated Underwater Photogrammetry: Applications in the Study of Antarctic Benthos

    NASA Astrophysics Data System (ADS)

    Piazza, P.; Cummings, V.; Lohrer, D.; Marini, S.; Marriott, P.; Menna, F.; Nocerino, E.; Peirano, A.; Schiaparelli, S.

    2018-05-01

    Ecological studies of marine benthic communities have received a major boost from the application of a variety of non-destructive sampling and mapping techniques based on underwater image and video recording. The well-established scientific diving practice consists of acquiring single-path or 'round-trip' imagery over elongated transects, with the imaging device oriented in a nadir-looking direction. As may be expected, the application of automatic image processing procedures to data not specifically acquired for 3D modelling can be risky, especially if proper tools for assessing the quality of the produced results are not employed. This paper, born from an international cooperation, focuses on this topic, which is of great interest for ecological and monitoring benthic studies in Antarctica. Several video footages recorded by different scientific teams in different years are processed with an automatic photogrammetric procedure, and salient statistical features are reported to critically analyse the derived results. As expected, the inclusion of oblique images from additional lateral strips may improve the expected accuracy in the object space, without altering too much the current video recording practices.

  4. CFHT data processing and calibration ESPaDOnS pipeline: Upena and OPERA (optical spectropolarimetry)

    NASA Astrophysics Data System (ADS)

    Martioli, Eder; Teeple, D.; Manset, Nadine

    2011-03-01

    CFHT is responsible for processing raw ESPaDOnS images, removing instrument-related artifacts, and delivering science-ready data to the PIs. Here we describe the Upena pipeline, the software used to reduce the echelle spectro-polarimetric data obtained with the ESPaDOnS instrument. Upena is an automated pipeline that performs calibration and reduction of raw images. It can perform both real-time reduction on an image-by-image basis and a complete reduction after the observing night. Upena produces polarization and intensity spectra in FITS format. The pipeline is designed to perform parallel computing for improved speed, which assures that the final products are delivered to the PIs before noon HST after each night of observations. We also present the OPERA project, an open-source pipeline to reduce ESPaDOnS data that will be developed as a collaborative effort between CFHT and the scientific community. OPERA will match the core capabilities of Upena and, in addition, will be open-source, flexible and extensible.

  5. The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Good, J. C.

    2017-05-01

    The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics, but which also offers value to visualization: a background rectification algorithm that reveals the faint structure in an image, and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.
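
    The histogram-equalization stretch mentioned for mViewer can be sketched in plain numpy: remap pixel values through their empirical CDF so the displayed histogram is roughly flat. This illustrates the concept only; Montage's adaptive implementation is more refined, and "m101.fits" is a placeholder file name.

```python
# Plain histogram-equalization stretch of a FITS image: each pixel is
# replaced by its rank in the sorted pixel distribution, mapped to
# [0, 1]. A conceptual sketch, not Montage's algorithm.
import numpy as np
from astropy.io import fits

data = fits.getdata("m101.fits").astype(float)   # placeholder file
finite = np.isfinite(data)

values = np.sort(data[finite])
cdf = np.searchsorted(values, data[finite]) / values.size  # rank -> [0, 1]

stretched = np.zeros_like(data)
stretched[finite] = cdf
# 'stretched' can now be passed through any color table for display.
print(stretched.min(), stretched.max())
```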

  6. We get the algorithms of our ground truths: Designing referential databases in digital image processing

    PubMed Central

    Jaton, Florian

    2017-01-01

    This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802

  7. AIRSAR Web-Based Data Processing

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne

    2007-01-01

    The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. It also provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and compensation for anomalous data. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image overnight, without operator intervention, from an entire 90-GB tape (32 MB/s) of raw SAR data. The software also allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates products according to each data processing request stored in the database via a queue management system. Users obtain automatically generated, co-registered multi-frequency images: the software processes polarimetric and/or interferometric SAR data in ground and/or slant projection according to user processing requests for any of the 12 radar modes.

  8. Improving safety in CT through the use of educational media.

    PubMed

    Mattingly, Melisa

    2011-01-01

    With a grant from the AHRA and Toshiba Putting Patients First program, Community Hospital in Indianapolis, IN set out to reduce the need for patient sedation, mechanical restraint, additional radiation dosage, and repeat procedures for pediatric patients. An online video was produced to educate pediatric patients and their caregivers about the diagnostic imaging process, enabling them to be more comfortable and compliant during the procedure. Early information and results indicate a safer experience for the patient. The goal is for the video to become a new best-practice tool for improving patient care and safety in diagnostic imaging.

  9. Development of Very Long Baseline Interferometry (VLBI) techniques in New Zealand: Array simulation, image synthesis and analysis

    NASA Astrophysics Data System (ADS)

    Weston, S. D.

    2008-04-01

    This thesis presents the design and development of a process to model Very Long Baseline Interferometry (VLBI) aperture synthesis antenna arrays. In line with the Auckland University of Technology (AUT) Institute for Radiophysics and Space Research (IRSR) aims to develop the knowledge, skills and experience within New Zealand, extensive use of existing radio astronomical software has been incorporated into the process, namely AIPS (Astronomical Image Processing System), MIRIAD (a radio interferometry data reduction package) and DIFMAP (a program for synthesis imaging of visibility data from interferometer arrays of radio telescopes). This process has been used to model various antenna array configurations, with antennas at two proposed New Zealand sites and a possible antenna at Scott Base in Antarctica operating in a VLBI array with existing Australian facilities; the results are presented in an attempt to demonstrate the improvement to be gained by joint trans-Tasman VLBI observation. It is hoped these results and the process will assist the planning and placement of proposed New Zealand radio telescopes for cooperation with groups such as the Australian Long Baseline Array (LBA), others in the Pacific Rim and possibly globally, as well as potential future involvement of New Zealand with the SKA. The developed process has also been used to model a phased building schedule for the SKA in Australia and the addition of two antennas in New Zealand. This has been presented to the wider astronomical community via the Royal Astronomical Society of New Zealand Journal, and is summarized in this thesis with some additional material. A new measure of quality ("figure of merit") for comparing the original model image and final CLEAN images by utilizing normalized 2-D cross-correlation is evaluated as an alternative to the subjective visual comparison undertaken to date by other groups. This new unit of measure is then used in the presentation of the results to provide a quantitative comparison of the different array configurations modelled. Included in the process is the development of a new antenna array visibility program, based on a Perl script written by Prof Steven Tingay to plot antenna visibilities for the Australian Square Kilometre Array (SKA) proposal. This has been expanded and improved, removing the hard-coded assumptions specific to the SKA configuration and providing a new, useful and flexible program for the wider astronomical community. A prototype user interface using html/cgi/perl was developed so that the underlying software packages can be served over the web to a user via an internet browser. This was used to demonstrate how easy it is to provide a friendlier interface compared to the existing cumbersome and difficult command-line-driven interfaces (although the command line can be retained for more experienced users).
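
    The figure of merit described, the normalized 2-D cross-correlation between the model image and the final CLEAN image, is easy to state precisely. A minimal zero-shift version (the full version would also search over shifts) looks like this with synthetic images:

```python
# Normalized cross-correlation at zero lag: 1.0 means a perfect
# reconstruction of the model image. Synthetic images stand in for a
# model/CLEAN pair.
import numpy as np

def normalized_correlation(model, clean):
    a = model - model.mean()
    b = clean - clean.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(2)
model = rng.random((128, 128))
clean = model + 0.1 * rng.standard_normal(model.shape)  # imperfect recovery
print(f"figure of merit: {normalized_correlation(model, clean):.3f}")
```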

  10. The Java Image Science Toolkit (JIST) for rapid prototyping and publishing of neuroimaging software.

    PubMed

    Lucas, Blake C; Bogovic, John A; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L; Pham, Dzung L; Landman, Bennett A

    2010-03-01

    Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUI's, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC).

  11. The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software

    PubMed Central

    Lucas, Blake C.; Bogovic, John A.; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.; Pham, Dzung

    2010-01-01

    Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUI's, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). PMID:20077162

  12. Advanced medical imaging protocol workflow-a flexible electronic solution to optimize process efficiency, care quality and patient safety in the National VA Enterprise.

    PubMed

    Medverd, Jonathan R; Cross, Nathan M; Font, Frank; Casertano, Andrew

    2013-08-01

    Radiologists routinely make decisions with only limited information when assigning protocol instructions for the performance of advanced medical imaging examinations. Opportunity exists to simultaneously improve the safety, quality and efficiency of this workflow through the application of an electronic solution leveraging health system resources to provide concise, tailored information and decision support in real-time. Such a system has been developed using an open source, open standards design for use within the Veterans Health Administration. The Radiology Protocol Tool Recorder (RAPTOR) project identified key process attributes as well as inherent weaknesses of paper processes and electronic emulators of paper processes to guide the development of its optimized electronic solution. The design provides a kernel that can be expanded to create an integrated radiology environment. RAPTOR has implications relevant to the greater health care community, and serves as a case model for modernization of legacy government health information systems.

  13. Zooplankton Grazing Effects on Particle Size Spectra under Different Seasonal Conditions

    NASA Astrophysics Data System (ADS)

    Stamieszkin, K.; Poulton, N.; Pershing, A. J.

    2016-02-01

    Oceanic particle size spectra can be used to explain and predict variability in carbon export efficiency, since larger particles are more likely to sink to depth than small particles. The distribution of biogenic particle size in the surface ocean is the result of many variables and processes, including nutrient availability, primary productivity, aggregation, remineralization, and grazing. We conducted a series of grazing experiments to test the hypothesis that mesozooplankton shift particle size spectra toward larger particles, via grazing and egestion of relatively large fecal pellets. These experiments were carried out over several months, and used natural communities of mesozooplankton and their microbial prey, collected offshore of the Damariscotta River in the Gulf of Maine. We analyzed the samples using Fluid Imaging Technologies' FlowCam®, a particle imaging system. With this equipment, we processed live samples, decreasing the likelihood of losing or damaging fragile particles, and thereby lessening sources of error in commonly used preservation and enumeration protocols. Our results show how the plankton size spectrum changes as the Gulf of Maine progresses through a seasonal cycle. We explore the relationship of grazing community size structure to its effect on the overall biogenic particle size spectrum. At some times of year, mesozooplankton grazing does not alter the particle size spectrum, while at others it significantly does, affecting the potential for biogenic flux. We also examine prey selectivity, and find that chain diatoms are the only prey group preferentially consumed. Otherwise, we find that complete mesozooplankton communities are "evolved" to fit their prey such that most prey groups are grazed evenly. We discuss a metabolic numerical model which could be used to universalize the relationships between whole grazer and whole microbial communities, with respect to effects on particle size spectra.
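
    A common way to quantify the spectral shift described here is to bin particle sizes into logarithmic classes and fit the slope of the normalized abundance spectrum; a flatter (less negative) slope indicates relatively more large particles. A sketch with synthetic FlowCam-like data, not the authors' analysis:

```python
# Fit the slope of a particle size spectrum: histogram sizes in log
# bins, normalize counts by bin width, and fit a line in log-log space.
import numpy as np

rng = np.random.default_rng(3)
diam_um = rng.pareto(2.5, 5000) * 5 + 5   # synthetic particle diameters, um

bins = np.logspace(np.log10(5), np.log10(500), 20)
counts, edges = np.histogram(diam_um, bins=bins)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers

ok = counts > 0
slope, intercept = np.polyfit(np.log10(centers[ok]),
                              np.log10(counts[ok] / widths[ok]), 1)
print(f"size-spectrum slope: {slope:.2f}")
```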

  14. PACS 2000: quality control using the task allocation chart

    NASA Astrophysics Data System (ADS)

    Norton, Gary S.; Romlein, John R.; Lyche, David K.; Richardson, Ronald R., Jr.

    2000-05-01

    Medical imaging's technological evolution in the next century will continue to include Picture Archive and Communication Systems (PACS) and teleradiology. It is difficult to predict radiology's future in the new millennium, with both computed radiography and direct digital capture competing as the primary image acquisition methods for routine radiography. Changes in Computed Axial Tomography (CT) and Magnetic Resonance Imaging (MRI) continue to amaze the healthcare community. No matter how the acquisition, display, and archive functions change, Quality Control (QC) of the radiographic imaging chain will remain an important step in the imaging process. The Task Allocation Chart (TAC) is a tool that can be used in a medical facility's QC process to indicate the testing responsibilities of the image stakeholders and the medical informatics department. The TAC shows a grid of equipment to be serviced, tasks to be performed, and the organization assigned to perform each task. Additionally, skills, tasks, time, and references for each task can be provided. QC of the PACS must be stressed as a primary element of a PACS implementation. The TAC can be used to clarify responsibilities during warranty and paid maintenance periods. Establishing a TAC as part of a PACS implementation has a positive effect on patient care and clinical acceptance.

  15. Overcoming the Polyester Image.

    ERIC Educational Resources Information Center

    Regan, Dorothy

    1988-01-01

    Urges community colleges to overcome their image problem by documenting the colleges' impact on their communities. Suggests ways to determine what data should be collected, how to collect the information, and how it can be used to empower faculty, staff, and alumni to change the institution's image. (DMM)

  16. Picturing the Wheatbelt: exploring and expressing place identity through photography.

    PubMed

    Sonn, Christopher C; Quayle, Amy F; Kasat, Pilar

    2015-03-01

    Community arts and cultural development is a process that builds on and responds to the aspirations and needs of communities through creative means. It is participatory and inclusive, and uses multiple modes of representation to produce local knowledge. 'Voices' used photography and photo elicitation as the medium for exploring and expressing sense of place among Aboriginal and non-Indigenous children, young people and adults in four rural towns. An analysis of data generated by the project shows the diverse images that people chose to capture and the different meanings they afforded to their pictures. These meanings reflected individual and collective constructions of place, based on positive experiences and emotions tied to the natural environment and features of the built environment. We discuss community arts and cultural development practice with reference to creative visual methodologies and suggest that it is an approach that can contribute to community psychology's empowerment agenda.

  17. Systems-level analysis of microbial community organization through combinatorial labeling and spectral imaging.

    PubMed

    Valm, Alex M; Mark Welch, Jessica L; Rieken, Christopher W; Hasegawa, Yuko; Sogin, Mitchell L; Oldenbourg, Rudolf; Dewhirst, Floyd E; Borisy, Gary G

    2011-03-08

    Microbes in nature frequently function as members of complex multitaxon communities, but the structural organization of these communities at the micrometer level is poorly understood because of limitations in labeling and imaging technology. We report here a combinatorial labeling strategy coupled with spectral image acquisition and analysis that greatly expands the number of fluorescent signatures distinguishable in a single image. As an imaging proof of principle, we first demonstrated visualization of Escherichia coli labeled by fluorescence in situ hybridization (FISH) with 28 different binary combinations of eight fluorophores. As a biological proof of principle, we then applied this Combinatorial Labeling and Spectral Imaging FISH (CLASI-FISH) strategy using genus- and family-specific probes to visualize simultaneously and differentiate 15 different phylotypes in an artificial mixture of laboratory-grown microbes. We then illustrated the utility of our method for the structural analysis of a natural microbial community, namely, human dental plaque, a microbial biofilm. We demonstrate that 15 taxa in the plaque community can be imaged simultaneously and analyzed and that this community was dominated by early colonizers, including species of Streptococcus, Prevotella, Actinomyces, and Veillonella. Proximity analysis was used to determine the frequency of inter- and intrataxon cell-to-cell associations which revealed statistically significant intertaxon pairings. Cells of the genera Prevotella and Actinomyces showed the most interspecies associations, suggesting a central role for these genera in establishing and maintaining biofilm complexity. The results provide an initial systems-level structural analysis of biofilm organization.
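    The gain in distinguishable labels is simple combinatorics: choosing exactly two of eight fluorophores gives C(8,2) = 28 binary combinations, matching the E. coli demonstration. A minimal check of the arithmetic:

      from math import comb

      fluorophores = 8
      # Labels built from exactly two fluorophores per target (binary combinations).
      print(comb(fluorophores, 2))  # 28 distinguishable two-color signatures
      # Allowing any nonempty subset of the 8 fluorophores would give 2**8 - 1 = 255.
      print(2**fluorophores - 1)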

  18. A data base of ASAS digital imagery. [Advanced Solid-state Array Spectroradiometer

    NASA Technical Reports Server (NTRS)

    Irons, James R.; Meeson, Blanche W.; Dabney, Philip W.; Kovalick, William M.; Graham, David W.; Hahn, Daniel S.

    1992-01-01

    The Advanced Solid-State Array Spectroradiometer (ASAS) is an airborne, off-nadir tilting, imaging spectroradiometer that acquires digital image data for 29 spectral bands in the visible and near-infrared. The sensor is used principally for studies of the bidirectional distribution of solar radiation scattered by terrestrial surfaces. ASAS has acquired data for a number of terrestrial ecosystem field experiments, and investigators have received over 170 radiometrically corrected, multiangle, digital image data sets. A database of ASAS digital imagery has been established in the Pilot Land Data System (PLDS) at the NASA/Goddard Space Flight Center to provide access to these data by the scientific community. ASAS, its processed data, and the PLDS are described, together with recent improvements to the sensor system.

  19. Parallel excitation-emission multiplexed fluorescence lifetime confocal microscopy for live cell imaging

    PubMed Central

    Zhao, Ming; Li, Yu; Peng, Leilei

    2014-01-01

    We present a novel excitation-emission multiplexed fluorescence lifetime microscopy (FLIM) method that surpasses current FLIM techniques in multiplexing capability. The method employs Fourier multiplexing to simultaneously acquire confocal fluorescence lifetime images of multiple excitation wavelength and emission color combinations at 44,000 pixels/sec. The system is built with low-cost CW laser sources and standard PMTs with a versatile spectral configuration, and can be implemented as an add-on to commercial confocal microscopes. The Fourier lifetime confocal method allows fast multiplexed FLIM imaging, which makes it possible to monitor multiple biological processes in live cells. The low cost and compatibility with commercial systems could also make multiplexed FLIM more accessible to the biological research community. PMID:24921725
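    The principle of Fourier multiplexing, separating channels that are modulated at distinct frequencies within a single detector stream, can be illustrated with a toy single-pixel simulation (the sampling rate, frequencies, and amplitudes below are invented, and this is not the authors' instrument code):

      import numpy as np

      fs = 100_000.0                      # detector sampling rate, Hz (assumed)
      t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
      f1, f2 = 5_000.0, 8_000.0           # modulation frequencies of two channels

      # Detector sees the sum of both modulated fluorescence signals plus noise.
      a1, a2 = 1.0, 0.4                   # channel amplitudes
      signal = (a1 * (1 + np.cos(2 * np.pi * f1 * t))
                + a2 * (1 + np.cos(2 * np.pi * f2 * t))
                + 0.05 * np.random.randn(t.size))

      # Demultiplex: the FFT amplitude at each modulation frequency recovers a channel.
      spectrum = np.fft.rfft(signal) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      for f in (f1, f2):
          idx = np.argmin(np.abs(freqs - f))
          print(f"channel at {f:.0f} Hz: amplitude ~ {2 * np.abs(spectrum[idx]):.2f}")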

  20. A compact light-sheet microscope for the study of the mammalian central nervous system

    PubMed Central

    Yang, Zhengyi; Haslehurst, Peter; Scott, Suzanne; Emptage, Nigel; Dholakia, Kishan

    2016-01-01

    Investigation of the transient processes integral to neuronal function demands rapid and high-resolution imaging techniques over a large field of view, which cannot be achieved with conventional scanning microscopes. Here we describe a compact light sheet fluorescence microscope, featuring a 45° inverted geometry and an integrated photolysis laser, that is optimized for applications in neuroscience, in particular fast imaging of sub-neuronal structures in mammalian brain slices. We demonstrate the utility of this design for three-dimensional morphological reconstruction, activation of a single synapse with localized photolysis, and fast imaging of neuronal Ca2+ signalling across a large field of view. The developed system opens up a host of novel applications for the neuroscience community. PMID:27215692

  1. Global Boreal Forest Mapping with JERS-1: North America

    NASA Technical Reports Server (NTRS)

    Williams, Cynthia L.; McDonald, Kyle; Chapman, Bruce

    2000-01-01

    A collaborative effort is underway to map boreal forests worldwide using L-band, single-polarization Synthetic Aperture Radar (SAR) imagery from the Japanese Earth Resources Satellite (JERS-1). Final products of the North American Boreal Forest Mapping Project will include two continental-scale radar mosaics and supplementary multitemporal mosaics for Alaska, central Canada, and eastern Canada. For selected sites, we are also producing local-scale (100 km x 100 km) and regional-scale (1000 km x 1000 km) maps. As with the nearly completed Amazon component of the Global Rain Forest Mapping project, SAR imagery, radar image mosaics, and SAR-derived texture image products will be available to the scientific community on the World Wide Web. Image acquisition for this project has been completed, and processing and image interpretation are underway at the Alaska SAR Facility.

  2. Asic developments for radiation imaging applications: The medipix and timepix family

    NASA Astrophysics Data System (ADS)

    Ballabriga, Rafael; Campbell, Michael; Llopart, Xavier

    2018-01-01

    Hybrid pixel detectors were developed to meet the requirements for tracking in the inner layers of the LHC experiments. With a low input capacitance per channel (10-100 fF), it is relatively straightforward to design pulse-processing readout electronics with input-referred noise of ∼100 e- rms and pulse shaping times consistent with tagging of events to a single LHC bunch crossing, providing clean 'images' of the ionising tracks generated. In the Medipix Collaborations, the same concept has been adapted to provide practically noise-hit-free imaging in a wide range of applications. This paper reports on the development of three generations of readout ASICs. Two distinct streams of development can be identified: the Medipix ASICs, which integrate data from multiple hits on a pixel and provide the images in the form of frames, and the Timepix ASICs, which aim to send as much information about individual interactions as possible off-chip for further processing. One outstanding aspect of these devices has been their numerous successful applications, thanks to a large and active community of developers and users; that experience has in turn enabled new developments in detectors for High Energy Physics. This paper reviews the ASICs themselves and details some of the many applications.

  3. Topological visual mapping in robotics.

    PubMed

    Romero, Anna; Cazorla, Miguel

    2012-08-01

    A key problem in robotics is the construction of a map of the robot's environment. This map can be used in different tasks, such as localization, recognition, and obstacle avoidance. In addition, the simultaneous localization and mapping (SLAM) problem has attracted much interest in the robotics community. This paper presents a new method for visual mapping, using topological instead of metric information. For that purpose, we propose prior image segmentation into regions in order to group the extracted invariant features in a graph, so that each graph defines a single region of the image. Although other methods have been proposed for visual SLAM, our method is complete in the sense that it covers the whole process: it presents a new method for image matching; it defines a way to build the topological map; and it defines a matching criterion for loop-closing. The matching process takes into account visual features and their structure using the graph transformation matching (GTM) algorithm, which allows us to perform the matching and remove outliers. Then, using this image comparison method, we propose an algorithm for constructing topological maps. In the experimentation phase, we test the robustness of the method and its ability to construct topological maps. We have also introduced a new hysteresis behavior in order to solve some problems found when building the graph.
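    The GTM step removes outlier correspondences by requiring that the matched feature sets induce the same k-nearest-neighbour graph in both images. The following is a simplified, hedged sketch of that idea (plain k-NN graphs, without the median-distance filtering of the full algorithm):

      import numpy as np

      def knn_adjacency(points, k):
          """Boolean adjacency of the directed k-NN graph over 2D points."""
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
          np.fill_diagonal(d, np.inf)
          adj = np.zeros(d.shape, dtype=bool)
          for i, row in enumerate(d):
              adj[i, np.argsort(row)[:k]] = True
          return adj

      def gtm_filter(p, q, k=4):
          """Iteratively drop the correspondence whose k-NN neighbourhoods
          disagree most between the two images, until the graphs coincide.
          p, q: (N, 2) arrays of matched feature coordinates."""
          keep = np.arange(len(p))
          while len(keep) > k + 1:
              ap, aq = knn_adjacency(p[keep], k), knn_adjacency(q[keep], k)
              residual = (ap != aq).sum(axis=1)
              if residual.max() == 0:           # structures agree: done
                  break
              keep = np.delete(keep, residual.argmax())
          return keep

      # Toy example: identical layouts plus one gross outlier match.
      rng = np.random.default_rng(0)
      p = rng.uniform(0, 100, size=(12, 2))
      q = p + rng.normal(0, 0.5, size=p.shape)  # same structure, small jitter
      q[5] += 60.0                              # one wrong correspondence
      print(gtm_filter(p, q))                   # index 5 should be removed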

  4. Understanding Appearance-Enhancing Drug Use in Sport Using an Enactive Approach to Body Image

    PubMed Central

    Hauw, Denis; Bilard, Jean

    2017-01-01

    From an enactive approach to human activity, we suggest that the use of appearance-enhancing drugs is better explained by the sense-making related to body image than by the cognitive evaluation of social norms about appearance and the consequent psychopathology-oriented approach. After reviewing the main psychological disorders thought to link body image issues to the use of appearance-enhancing substances, we sketch a flexible, dynamic and embedded account of body image, defined as the individual's propensity to act and experience in specific situations. We show how this enacted body image is a complex process of sense-making that people engage in when they are trying to adapt to specific situations. These adaptations of the enacted body image require effort, perseverance and time, and therefore any substance that accelerates this process appears to be an easy and attractive solution. In this enactive account of body image, we underline that the link between the enacted body image and substance use is also anchored in the history of the body's previous interactions with the world. This emerges during periods of upheaval and hardship, especially in a context where athletes experience weak participatory sense-making in a sport community. We conclude by suggesting prevention and intervention designs that would promote a safe instrumental use of the body in sports, and psychological support procedures for athletes experiencing difficulties with substance use and body image. PMID:29238320

  5. A Hitchhiker's Guide to Functional Magnetic Resonance Imaging

    PubMed Central

    Soares, José M.; Magalhães, Ricardo; Moreira, Pedro S.; Sousa, Alexandre; Ganz, Edward; Sampaio, Adriana; Alves, Victor; Marques, Paulo; Sousa, Nuno

    2016-01-01

    Functional Magnetic Resonance Imaging (fMRI) studies have become increasingly popular both with clinicians and researchers, as they are capable of providing unique insights into brain function. However, multiple technical considerations (ranging from the specifics of paradigm design to imaging artifacts, complex protocol definition, the multitude of processing options and methods of analysis, and intrinsic methodological limitations) must be considered and addressed in order to optimize fMRI analysis and to arrive at the most accurate and grounded interpretation of the data. In practice, the researcher/clinician must choose, from many available options, the most suitable software tool for each stage of the fMRI analysis pipeline. Herein we provide a straightforward guide designed to address, for each of the major stages, the techniques and tools involved in the process. We have developed this guide both to help those new to the technique overcome the most critical difficulties in its use, and to serve as a resource for the neuroimaging community. PMID:27891073

  6. Quantitative multiplex immunohistochemistry reveals myeloid-inflamed tumor-immune complexity associated with poor prognosis

    PubMed Central

    Tsujikawa, Takahiro; Kumar, Sushil; Borkar, Rohan N.; Azimi, Vahid; Thibault, Guillaume; Chang, Young Hwan; Balter, Ariel; Kawashima, Rie; Choe, Gina; Sauer, David; El Rassi, Edward; Clayburgh, Daniel R.; Kulesz-Martin, Molly F.; Lutz, Eric R.; Zheng, Lei; Jaffee, Elizabeth M.; Leyshock, Patrick; Margolin, Adam A.; Mori, Motomi; Gray, Joe W.; Flint, Paul W.; Coussens, Lisa M.

    2017-01-01

    Here we describe a multiplexed immunohistochemical platform, with computational image processing workflows including image cytometry, enabling simultaneous evaluation of 12 biomarkers in one formalin-fixed paraffin-embedded tissue section. To validate this platform, we used tissue microarrays containing 38 archival head and neck squamous cell carcinomas and revealed differential immune profiles based on lymphoid and myeloid cell densities, correlating with human papillomavirus status and prognosis. Based on these results, we investigated 24 pancreatic ductal adenocarcinomas from patients who received neoadjuvant GVAX vaccination and revealed that response to therapy correlated with the degree of mono-myelocytic cell density and the percentage of CD8+ T cells expressing T cell exhaustion markers. These data highlight the utility of in situ immune monitoring for patient stratification and provide digital image processing pipelines (https://github.com/multiplexIHC/cppipe) to the community for examining immune complexity in precious tissue sections, where phenotype and tissue architecture are preserved, thereby improving biomarker discovery and assessment. PMID:28380359
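    Image cytometry reduces a segmented multiplex image to a per-cell intensity table that can then be gated much like flow-cytometry data. A generic, hedged sketch of such gating (marker names, values, and thresholds are invented; this is not the pipeline at the GitHub link above):

      import numpy as np

      # Hypothetical per-cell mean intensities from a segmented multiplex image;
      # the two columns stand in for, e.g., CD8 and an exhaustion marker (invented).
      rng = np.random.default_rng(1)
      cd8 = rng.lognormal(mean=1.0, sigma=0.8, size=500)
      pd1 = rng.lognormal(mean=0.5, sigma=0.9, size=500)

      cd8_thresh, pd1_thresh = 5.0, 3.0          # gates set by inspection in practice
      cd8_pos = cd8 > cd8_thresh
      exhausted = cd8_pos & (pd1 > pd1_thresh)   # CD8+ cells above the exhaustion gate

      print(f"CD8+ fraction: {cd8_pos.mean():.1%}")
      print(f"exhaustion-marker+ fraction of CD8+ cells: "
            f"{exhausted.sum() / max(cd8_pos.sum(), 1):.1%}")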

  7. Content-based quality evaluation of color images: overview and proposals

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine

    2003-12-01

    The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system that sets up a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of the main analysis criteria and performance measures currently in use. The aim is not to establish a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, first to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and second to make these results publicly available. This paper focuses on two important unsolved problems: Why is it so difficult to select a color space that gives better results than another? And why is it so difficult to select an image quality metric that agrees with the judgment of the Human Visual System? Several methods used either in color imaging or in image quality assessment are discussed. Proposals for content-based image measures and means of developing a standard test suite are then presented. We advocate an evaluation protocol based on an automated procedure; this is the ultimate goal of our proposal.
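    As a concrete instance of the metric-selection problem, two widely used full-reference measures can disagree on which of two degraded images is closer to the reference. A hedged sketch using scikit-image (an illustration, not the software package proposed in the paper):

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage import data, img_as_float
      from skimage.metrics import peak_signal_noise_ratio, structural_similarity

      ref = img_as_float(data.camera())                                  # reference
      noisy = np.clip(ref + 0.05 * np.random.randn(*ref.shape), 0.0, 1.0)
      blurred = gaussian_filter(ref, sigma=2.0)

      for name, img in [("noisy", noisy), ("blurred", blurred)]:
          psnr = peak_signal_noise_ratio(ref, img, data_range=1.0)
          ssim = structural_similarity(ref, img, data_range=1.0)
          print(f"{name}: PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
      # The two metrics may rank the distortions differently, which is
      # precisely why metric choice matters for a given application.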

  8. Metaproteomics of complex microbial communities in biogas plants

    PubMed Central

    Heyer, Robert; Kohrs, Fabian; Reichl, Udo; Benndorf, Dirk

    2015-01-01

    Production of biogas from agricultural biomass or organic wastes is an important source of renewable energy. Although thousands of biogas plants (BGPs) are operating in Germany, there is still significant potential to improve yields, e.g. from fibrous substrates. In addition, process stability should be optimized. Besides evaluating technical measures, improving our understanding of the microbial communities involved in the biogas process is considered a key issue for achieving both goals. Microscopic and genetic approaches to analysing community composition provide valuable experimental data, but fail to detect the presence of enzymes and the overall metabolic activity of microbial communities. Therefore, metaproteomics can significantly contribute to elucidating critical steps in the conversion of biomass to methane, as it delivers combined functional and phylogenetic data. Although metaproteomics analyses are challenged by sample impurities, sample complexity and redundant protein identification, and are still limited by the availability of genome sequences, recent studies have shown promising results. In the following, the workflow and potential pitfalls for metaproteomics of samples from full-scale BGPs are discussed. In addition, the value of metaproteomics in contributing to the further advancement of microbial ecology is evaluated. Finally, the synergistic effects expected when metaproteomics is combined with advanced imaging techniques, metagenomics, metatranscriptomics and metabolomics are addressed. PMID:25874383

  9. CONRAD—A software framework for cone-beam imaging in radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian

    2013-11-15

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well-known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 non-blank lines of code, of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table-top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and quantitative performance comparison between the methods of different groups.
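    CONRAD itself is a Java framework, but the filtered back-projection (FBP) step it includes can be illustrated in a few lines of Python with scikit-image (a hedged stand-in for illustration, not CONRAD code):

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon, rescale

      phantom = rescale(shepp_logan_phantom(), 0.5)          # 200x200 test object
      angles = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles, deg

      sinogram = radon(phantom, theta=angles)                # forward projection
      reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")  # FBP

      err = np.sqrt(np.mean((reconstruction - phantom) ** 2))
      print(f"RMS reconstruction error: {err:.4f}")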

  10. A New Image: Online Communities to Facilitate Teacher Professional Development

    ERIC Educational Resources Information Center

    Lock, Jennifer V.

    2006-01-01

    Realizing the potential of online or virtual communities to facilitate teacher professional development requires educators to change their current perceptions of professional development. This calls for educators to develop new images of ongoing opportunities for professional development, based on their needs within an online community of learners…

  11. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis suboptimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.
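    Ultrasound speckle is commonly modeled as multiplicative noise, so tissue-specific patterns can be imposed by drawing the multiplier from a per-tissue-class distribution. A minimal hedged sketch of that idea (the label map and distribution parameters below are invented, not the paper's calibrated models):

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical anatomical label map: 0 = background, 1 = muscle, 2 = fat.
      labels = np.zeros((128, 128), dtype=int)
      labels[32:96, 32:96] = 1
      labels[48:80, 48:80] = 2

      echogenicity = np.array([0.05, 0.6, 0.9])[labels]   # mean reflectivity per class

      # Multiplicative speckle: gamma-distributed multipliers, one shape per class.
      shape_per_class = np.array([1.0, 3.0, 8.0])         # larger shape = smoother
      k = shape_per_class[labels]
      speckle = rng.gamma(shape=k, scale=1.0 / k)         # unit-mean multiplier
      image = echogenicity * speckle
      print(image.shape, float(image.mean()))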

  12. The Geoscience Spaceborne Imaging Spectroscopy Technical Committees Calibration and Validation Workshop

    NASA Technical Reports Server (NTRS)

    Ong, Cindy; Mueller, Andreas; Thome, Kurtis; Pierce, Leland E.; Malthus, Timothy

    2016-01-01

    Calibration is the process of quantitatively defining a system's responses to known, controlled signal inputs, and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs [1]. Similar to other Earth observation (EO) sensors, the calibration and validation of spaceborne imaging spectroscopy sensors is a fundamental underpinning activity. Calibration and validation determine the quality and integrity of the data provided by spaceborne imaging spectroscopy sensors and have enormous downstream impacts on the accuracy and reliability of products generated from these sensors. At least five imaging spectroscopy satellites are planned to be launched within the next five years, with the two most advanced scheduled to be launched in the next two years [2]. The launch of these sensors requires the establishment of suitable, standardized, and harmonized calibration and validation strategies to ensure that high-quality data are acquired and comparable between these sensor systems. Such activities are extremely important for the community of imaging spectroscopy users. Recognizing the need to focus on this underpinning topic, the Geoscience Spaceborne Imaging Spectroscopy (previously, the International Spaceborne Imaging Spectroscopy) Technical Committee launched a calibration and validation initiative at the 2013 International Geoscience and Remote Sensing Symposium (IGARSS) in Melbourne, Australia, and a post-conference activity of a vicarious calibration field trip at Lake Lefroy in Western Australia.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renslow, Ryan S.; Lindemann, Stephen R.; Cole, Jessica K.

    Elucidating nutrient exchange in microbial communities is an important step in understanding the relationships between microbial systems and global biogeochemical cycles, but these communities are complex and the interspecies interactions that occur within them are not well understood. Phototrophic consortia are useful and relevant experimental systems to investigate such interactions, as they are not only prevalent in the environment, but some are cultivable in vitro and amenable to controlled scientific experimentation. High spatial resolution secondary ion mass spectrometry (NanoSIMS) is a powerful tool capable of visualizing the metabolic activities of single cells within a biofilm, but quantitative analysis of the resulting data has typically been a manual process, resulting in a task that is both laborious and susceptible to human error. Here, we describe the creation and application of a semi-automated image-processing pipeline that can analyze NanoSIMS-generated data of phototrophic biofilms. The tool employs an image analysis process, which includes both elemental and morphological segmentation, producing a final segmented image that allows for discrimination between autotrophic and heterotrophic biomass, the detection of individual cyanobacterial filaments and heterotrophic cells, the quantification of isotopic incorporation of individual heterotrophic cells, and calculation of relevant population statistics. We demonstrate the functionality of the tool by using it to analyze the uptake of 15N provided as either nitrate or ammonium through the unicyanobacterial consortium UCC-O and imaged via NanoSIMS. We found that the degree of 15N incorporation by individual cells was highly variable when labeled with 15NH4+, but much more even when biofilms were labeled with 15NO3-. In the 15NH4+-amended biofilms, the heterotrophic distribution of 15N incorporation was highly skewed, with a large population showing moderate 15N incorporation and a small number of organisms displaying very high 15N uptake. The results showed that analysis of NanoSIMS data can be performed in a way that allows for quantitation of the elemental uptake of individual cells, a technique necessary for advancing research into the metabolic networks that exist within biofilms, with statistical analyses that are supported by automated, user-friendly processes.
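    The central quantitation step, an isotope fraction per segmented cell, can be sketched generically with scikit-image (a hedged illustration of the pipeline's idea, not the authors' code; the synthetic ion images are invented):

      import numpy as np
      from skimage.measure import label, regionprops

      rng = np.random.default_rng(7)

      # Synthetic 14N and 15N ion count images with two "cells" (values invented).
      n14 = rng.poisson(5, size=(64, 64)).astype(float)
      n15 = rng.poisson(1, size=(64, 64)).astype(float)
      cells = np.zeros((64, 64), dtype=bool)
      cells[10:25, 10:25] = True
      cells[40:55, 35:60] = True
      n15[cells] += rng.poisson(4, size=cells.sum())    # label uptake inside cells

      # Morphological segmentation, then per-cell atom fraction 15N / (15N + 14N).
      labeled = label(cells)
      for region in regionprops(labeled):
          mask = labeled == region.label
          frac = n15[mask].sum() / (n15[mask].sum() + n14[mask].sum())
          print(f"cell {region.label}: 15N fraction = {frac:.3f}, area = {region.area}")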

  14. Community Tools for Cartographic and Photogrammetric Processing of Mars Express HRSC Images

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Howington-Kraus, E.; Edmundson, K.; Redding, B.; Galuszka, D.; Hare, T.; Gwinner, K.

    2017-07-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged approximately 77% of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products, from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest-level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995), which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was necessary to split observations into blocks of constant exposure time, greatly increasing the effort needed to control the images and collect DTMs. Here, we describe a substantially improved HRSC processing capability that incorporates sensor models with varying line timing in the current ISIS3 system (Sides 2017) and SOCET SET. This enormously reduces the work effort for processing most images and eliminates the artifacts that arose from segmenting them. In addition, the software takes advantage of the continuously evolving capabilities of ISIS3 and the improved image matching module NGATE (Next Generation Automatic Terrain Extraction, incorporating area- and feature-based algorithms, multi-image and multi-direction matching) of SOCET SET, thus greatly reducing the need for manual editing of DTM errors. We have also developed a procedure for geodetically controlling the images to Mars Orbiter Laser Altimeter (MOLA) data by registering a preliminary stereo topographic model to MOLA using the point cloud alignment (pc_align) function of the NASA Ames Stereo Pipeline (ASP; Moratto et al. 2010). This effectively converts inter-image tiepoints into ground control points in the MOLA coordinate system. The result is improved absolute accuracy and a significant reduction in work effort relative to manual measurement of ground control. The ISIS and ASP software used are freely available; SOCET SET is a commercial product.
By the end of 2017 we expect to have ported our SOCET SET HRSC sensor model to the Community Sensor Model (CSM; Community Sensor Model Working Group 2010; Hare and Kirk 2017) standard utilized by the successor photogrammetric system SOCET GXP that is currently offered by BAE. In early 2018, we are also working with BAE to release the CSM source code under a BSD or MIT open source license. We illustrate current HRSC processing capabilities with three examples, of which the first two come from the DTM comparison of 2007. Candor Chasma (h1235_0001) was a near-periapse observation with constant exposure time that could be processed relatively easily at that time. We show qualitative and quantitative improvements in DTM resolution and precision as well as a greatly reduced need for manual editing, and illustrate some of the photometric applications possible in ISIS. At the Nanedi Valles site we are now able to process all 3 long-arc orbits (h0894_0000, h0905_0000 and h0927_0000) without segmenting the images. Finally, processing image set h4235_0001, which covers the landing site of the Mars Science Laboratory (MSL) rover and its rugged science target of Aeolis Mons in Gale crater, provides a rare opportunity to evaluate DTM resolution and precision because extensive High Resolution Imaging Science Experiment (HiRISE) DTMs are available (Golombek et al. 2012). The HiRISE products have approximately 50x smaller pixel scale, so that discrepancies can mostly be attributed to HRSC. We use the HiRISE DTMs to compare the resolution and precision of our HRSC DTMs with the (evolving) standard products. We find that the vertical precision of HRSC DTMs is comparable to the pixel scale but the horizontal resolution may be 15-30 image pixels, depending on processing. This is significantly coarser than the lower limit of 3-5 pixels based on the minimum size for image patches to be matched. Stereo DTMs registered to MOLA altimetry by surface fitting typically deviate by 10 m or less in mean elevation. Estimates of the RMS deviation are strongly influenced by the sparse sampling of the altimetry, but range from
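    The MOLA registration step uses the freely available ASP tools; a hedged sketch of a pc_align invocation (the file names are placeholders and the displacement bound would need tuning per site):

      import subprocess

      # Align a preliminary HRSC stereo DTM to MOLA shot data with ASP's pc_align.
      # The reference cloud comes first; --max-displacement bounds the expected offset.
      subprocess.run([
          "pc_align",
          "--max-displacement", "200",      # metres; assumed misregistration bound
          "mola_shots.csv",                 # placeholder reference point cloud
          "hrsc_stereo_dtm.tif",            # placeholder source DTM
          "-o", "align/run",                # output prefix
      ], check=True)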

  15. Application of infrared thermography in computer aided diagnosis

    NASA Astrophysics Data System (ADS)

    Faust, Oliver; Rajendra Acharya, U.; Ng, E. Y. K.; Hong, Tan Jen; Yu, Wenwei

    2014-09-01

    The invention of thermography in the 1950s posed a formidable problem to the research community: what is the relationship between disease and the heat radiation captured with infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship, aided by advances in processing techniques and by the improved sensitivity and spatial resolution of thermal sensors. However, despite this progress, fundamental issues with this imaging modality remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the changes in heat radiation, and in radiation pattern, which indicate disease are minute. On a technical level, this places high requirements on image capture and processing. On a more abstract level, these problems lead to inter-observer variability, and at an even more abstract level they lead to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the performance results. As a consequence, we aim to promote thermography-based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality, and that this will pave the way for integrated health care systems that maximize the quality of patient care.

  16. Long-term 4D Geoelectrical Imaging of Moisture Dynamics in an Active Landslide

    NASA Astrophysics Data System (ADS)

    Uhlemann, S.; Chambers, J. E.; Wilkinson, P. B.; Maurer, H.; Meldrum, P.; Gunn, D.; Smith, A.; Dijkstra, T.

    2016-12-01

    Landslides are a major natural hazard, endangering communities and infrastructure worldwide. Mitigating landslide risk relies on understanding causes and triggering processes, which are often linked to moisture dynamics in slopes causing material softening and elevated pore water pressures. Geoelectrical monitoring is frequently applied to study landslide hydrology. However, its sensitivity to sensor movements has been a challenge for long-term studies on actively failing slopes. Although 2D data acquisition has previously been favoured, it provides limited resolution and relatively poor representation of important 3D landslide structures. We present a novel methodology to incorporate electrode movements into a time-lapse 3D inversion workflow, resulting in a virtually artefact-free time-series of resistivity models. Using temperature correction and laboratory hydro-geophysical relationships, resistivity models are translated into models of moisture content. The data span more than three years, enabling imaging of processes pre- and post landslide reactivation. In the two years before reactivation, the models showed surficial wetting and drying, drainage pathways, and deeper groundwater dynamics. During reactivation, exceptionally high moisture contents were imaged throughout the slope, which was confirmed by independent measurements. Preferential flow was imaged that stabilized parts of the landslide by diverting moisture, and thus dissipating pore pressures, from the slip surface. The results highlight that moisture levels obtained from resistivity monitoring may provide a better activity threshold than rainfall intensity. Based on this work, pro-active remediation measures could be designed and effective early-warning systems implemented. Eventually, resistivity monitoring that can account for moving electrodes may provide a new means for pro-active mitigation of landslide risk, especially for communities and critical infrastructure.
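    The translation from inverted resistivity to moisture content passes through a laboratory-calibrated petrophysical relationship. As a hedged stand-in for the site-specific relationship used in the study, the sketch below applies Archie's law with assumed parameters (temperature correction omitted):

      import numpy as np

      def archie_saturation(rho, rho_w=5.0, phi=0.35, a=1.0, m=2.0, n=2.0):
          """Water saturation from bulk resistivity via Archie's law:
          rho = a * rho_w * phi**(-m) * S_w**(-n)
          =>  S_w = (a * rho_w / (phi**m * rho))**(1/n)
          All petrophysical parameters here are assumed, not the study's lab values."""
          return (a * rho_w / (phi ** m * rho)) ** (1.0 / n)

      rho = np.array([100.0, 300.0, 1000.0])        # inverted bulk resistivities, ohm-m
      s_w = np.clip(archie_saturation(rho), 0, 1)
      theta = 0.35 * s_w                            # volumetric moisture = porosity * S_w
      print(np.round(theta, 3))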

  17. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time-series analysis of sensor data has provided important information on glacier flow variability through detecting speed and thickness changes, tracking features, and acquiring model input. Thanks to advancements in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected data at one-hour intervals, continuously for more than one year at some (but not all) sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide the theoretical and practical fundamentals for data processing, along with digital image processing techniques. Time-lapse images over these periods in west Greenland capture various phenomena. Problems include rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and pecking of the plastic window by ravens. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras exhibit large distortion that must be compensated for precise photogrammetric use. Further, a massive number of images must be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.
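    The displacement computation rests on correlation matching between image pairs; a minimal hedged sketch using scikit-image's normalized cross-correlation (synthetic data, not the project's processing chain):

      import numpy as np
      from skimage.feature import match_template

      rng = np.random.default_rng(3)
      frame0 = rng.random((200, 200))               # stand-in for the first image
      dy, dx = 6, 9                                 # true feature displacement (pixels)
      frame1 = np.roll(np.roll(frame0, dy, axis=0), dx, axis=1)

      patch = frame0[80:112, 80:112]                # feature template from frame 0
      corr = match_template(frame1, patch)          # normalized cross-correlation map
      peak = np.unravel_index(np.argmax(corr), corr.shape)
      print("estimated displacement:", peak[0] - 80, peak[1] - 80)   # (6, 9)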

  18. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details allows software development efforts to concentrate on the algorithm implementation. Our framework enables biomedical image analysis software with 3D visualization capabilities to be built with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.
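    In Fiji, the API can be driven from a short Jython script; a hedged sketch (the file path is a placeholder, and the calls follow the 3D Viewer's published examples):

      # Jython script for Fiji's Script Editor (language: Python/Jython).
      from ij import IJ
      from ij3d import Image3DUniverse

      imp = IJ.openImage("/path/to/stack.tif")   # placeholder path to an image stack
      univ = Image3DUniverse()                   # create a 3D viewer universe
      univ.show()                                # open the 3D window
      content = univ.addVoltex(imp)              # add the stack as a volume rendering
      content.setTransparency(0.3)               # adjust volume transparency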

  19. The APIS service : a tool for accessing value-added HST planetary auroral observations over 1997-2015

    NASA Astrophysics Data System (ADS)

    Lamy, L.; Henry, F.; Prangé, R.; Le Sidaner, P.

    2015-10-01

    The Auroral Planetary Imaging and Spectroscopy (APIS) service http://obspm.fr/apis/ provides open and interactive access to processed auroral observations of the outer planets and their satellites. Such observations are of interest for a wide community at the interface between planetology, magnetospheric and heliospheric physics. APIS consists of (i) a high-level database, built from planetary auroral observations acquired by the Hubble Space Telescope (HST) since 1997 with its most used Far-Ultraviolet spectro-imagers, (ii) a dedicated search interface aimed at browsing this database efficiently through relevant conditional search criteria (Figure 1), and (iii) the ability to work interactively with the data online through plotting tools developed by the Virtual Observatory (VO) community, such as Aladin and Specview. This service is VO compliant and can therefore also be queried by external search tools of the VO community. The diversity of the available data and the capability to sort them by relevant physical criteria should in particular facilitate statistical studies, on long-term scales and/or through combined multi-instrument, multispectral analyses [1,2]. We will present the updated capabilities of APIS with several examples. Several tutorials are available online.
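    Because APIS is VO compliant, it can in principle be queried programmatically with generic VO client libraries; a hedged sketch using pyvo (the service URL, table, and column names below are placeholders, not APIS's actual schema):

      import pyvo

      # Placeholder endpoint: consult the APIS documentation for the real TAP URL.
      service = pyvo.dal.TAPService("http://example.obspm.fr/tap")

      # Hypothetical query: HST FUV auroral observations of Jupiter after 2000.
      results = service.run_sync(
          "SELECT TOP 10 * FROM apis.observations "
          "WHERE target_name = 'Jupiter' AND time_min > '2000-01-01'"
      )
      for row in results:
          print(row)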

  20. Image: Reflecting the National Face of Community Colleges.

    ERIC Educational Resources Information Center

    Kent, Norma

    1996-01-01

    Suggests that there is a "low-to-no-profile syndrome" afflicting community colleges at the national level that must be rectified, highlighting the importance of a national image campaign. Describes results from focus groups indicating doubt over the colleges' quality. Reviews strategies for financing an image campaign and presents potential…

  1. Progress in the Diagnosis of Appendicitis: A Report from Washington State’s Surgical Care and Outcomes Assessment Program (SCOAP)

    PubMed Central

    Drake, Frederick Thurston; Florence, Michael G.; Johnson, Morris G.; Jurkovich, Gregory J.; Kwon, Steve; Schmidt, Zeila; Thirlby, Richard C.; Flum, David R.

    2012-01-01

    BACKGROUND AND OBJECTIVES: Studies suggest that CT and US can effectively diagnose and rule out appendicitis, safely reducing negative appendectomies (NA); however, some within the surgical community remain reluctant to add imaging to the clinical evaluation of patients with suspected appendicitis. The Surgical Care and Outcomes Assessment Program (SCOAP) is a physician-led quality initiative that monitors performance by benchmarking processes of care and outcomes. Since 2006, accurate diagnosis of appendicitis has been a priority for SCOAP. The objective of this study was to evaluate the association between imaging and NA in the general community. METHODS: Data were collected prospectively for consecutive appendectomy patients (age > 15) at nearly 60 hospitals. SCOAP data are obtained directly from clinical records, including radiology, operative, and pathology reports. Multivariate logistic regression models were used to examine the association between imaging and NA. Tests for trends over time were also conducted. RESULTS: Among 19,327 patients (47.9% female) who underwent appendectomy, 5.4% had NA. Among patients who were imaged, the frequency of NA was 4.5%, whereas among those who were not imaged it was 15.4% (p < 0.001). This association was consistent for males (3% vs. 10%, p < 0.001) and for reproductive-age females (6.9% vs. 24.7%, p < 0.001). In a multivariate model adjusted for age, sex, and WBC, the odds of NA for patients not imaged were 3.7 times the odds for those who received imaging (95% CI 3.0-4.4). Among SCOAP hospitals, use of imaging increased and NA decreased significantly over time; the frequency of perforation was unchanged. CONCLUSIONS: Patients who were not imaged during work-up for suspected appendicitis had over three times the odds of NA as those who were imaged. Routine imaging in the evaluation of patients suspected to have appendicitis can safely reduce unnecessary operations. Programs such as SCOAP improve care through peer-led, benchmarked practice change. PMID:22964731
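    The headline odds ratio follows almost directly from the reported frequencies; as a check on the arithmetic (the paper's model also adjusts for age, sex, and WBC, so the published value differs slightly):

      # Unadjusted odds ratio of negative appendectomy (NA), not imaged vs imaged.
      p_not_imaged = 0.154   # NA frequency without imaging (from the abstract)
      p_imaged = 0.045       # NA frequency with imaging

      odds_not_imaged = p_not_imaged / (1 - p_not_imaged)
      odds_imaged = p_imaged / (1 - p_imaged)
      print(round(odds_not_imaged / odds_imaged, 2))  # ~3.86, near the adjusted 3.7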

  2. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on this huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in previous work for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results produced by an expert.
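    The optimizer underneath the PSO-Snake model is standard particle swarm optimization; a minimal hedged sketch of PSO on a toy objective (not the authors' hybrid tracker):

      import numpy as np

      rng = np.random.default_rng(0)

      def objective(x):
          """Toy objective: sphere function, minimum at the origin."""
          return np.sum(x ** 2, axis=-1)

      n_particles, dim, iters = 30, 2, 100
      w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and acceleration weights

      x = rng.uniform(-5, 5, size=(n_particles, dim))   # positions
      v = np.zeros_like(x)                              # velocities
      pbest, pbest_f = x.copy(), objective(x)           # per-particle bests
      gbest = pbest[np.argmin(pbest_f)]                 # swarm best

      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = x + v
          f = objective(x)
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          gbest = pbest[np.argmin(pbest_f)]

      print("best solution:", gbest, "value:", objective(gbest))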

  3. Transform- and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging.

    PubMed

    Jiang, Shaowei; Liao, Jun; Bian, Zichao; Guo, Kaikai; Zhang, Yongbing; Zheng, Guoan

    2018-04-01

    A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI are largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features in the captured images. However, relying solely on the spatial information leads to relatively poor performance of the autofocusing process. It is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that the information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems, and the images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks. We have made our training and testing data set (~12 GB) open-source for the broad research community.
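    The transform-domain inputs are straightforward to compute; a hedged sketch of assembling spatial, Fourier-magnitude, and autocorrelation channels for a CNN (generic numpy, not the authors' code; the autocorrelation uses the Wiener-Khinchin relation):

      import numpy as np

      def autofocus_channels(img):
          """Stack spatial, log-Fourier-magnitude, and autocorrelation channels."""
          img = (img - img.mean()) / (img.std() + 1e-8)        # normalize patch
          F = np.fft.fftshift(np.fft.fft2(img))
          fourier_mag = np.log1p(np.abs(F))                    # defocus shifts cutoff
          # Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum.
          power = np.abs(np.fft.fft2(img)) ** 2
          autocorr = np.fft.fftshift(np.real(np.fft.ifft2(power)))
          autocorr /= autocorr.max() + 1e-8
          return np.stack([img, fourier_mag, autocorr], axis=0)  # (3, H, W) input

      channels = autofocus_channels(np.random.rand(256, 256))
      print(channels.shape)  # (3, 256, 256)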

  4. Segmentation of anatomical structures of the heart based on echocardiography

    NASA Astrophysics Data System (ADS)

    Danilov, V. V.; Skirnevskiy, I. P.; Gerget, O. M.

    2017-01-01

    Nowadays, many practical applications in the field of medical image processing require valid and reliable segmentation of images as input data. Some of the commonly used imaging techniques are ultrasound, CT, and MRI. The main advantage of echocardiography (EchoCG) over other medical imaging equipment is that it is safer, low-cost, non-invasive and non-traumatic. Three-dimensional EchoCG is a non-invasive imaging modality that is complementary and supplementary to two-dimensional imaging and can be used to examine cardiovascular function and anatomy in different medical settings. The challenging problems presented by EchoCG image processing, such as speckle phenomena, noise, temporal non-stationarity of processes, unsharp boundaries, and attenuation, forced us to consider and compare existing methods and then to develop an innovative approach that can tackle the problems connected with clinical applications. The present study concerns the analysis and development of a system for automatic detection of cardiac parameters from EchoCG that will provide new data on the dynamics of changes in cardiac parameters and improve the accuracy and reliability of diagnosis. Research in image segmentation has highlighted the capabilities of image-based methods for medical applications. The focus of the research is on both theoretical and practical aspects of the application of the methods. Some of the segmentation approaches may be of interest to the imaging and medical communities. Performance evaluation is carried out by comparing the borders obtained from the considered methods to those manually prescribed by a medical specialist. Promising results demonstrate the possibilities and the limitations of each technique for image segmentation problems. The developed approach makes it possible to eliminate errors in calculating the geometric parameters of the heart; to satisfy the necessary conditions of speed, accuracy, and reliability; and to build a master model that will be an indispensable aid for operations on a beating heart.
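    Border comparison against manual delineation is usually reported with an overlap metric; a hedged sketch of the Dice coefficient between an automatic and a manual mask (synthetic masks for illustration):

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity coefficient between two binary masks."""
          inter = np.logical_and(mask_a, mask_b).sum()
          return 2.0 * inter / (mask_a.sum() + mask_b.sum())

      auto = np.zeros((100, 100), dtype=bool)
      auto[20:70, 25:75] = True                 # automatic segmentation
      manual = np.zeros((100, 100), dtype=bool)
      manual[22:72, 25:78] = True               # expert delineation
      print(f"Dice = {dice(auto, manual):.3f}") # 1.0 means perfect agreement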

  5. A forensic science perspective on the role of images in crime investigation and reconstruction.

    PubMed

    Milliet, Quentin; Delémont, Olivier; Margot, Pierre

    2014-12-01

    This article presents a global vision of images in forensic science. The proliferation of perspectives on the use of images throughout criminal investigations and the increasing demand for research on this topic seem to demand a forensic science-based analysis. In this study, the definitions of and concepts related to material traces are revisited and applied to images, and a structured approach is used to persuade the scientific community to extend and improve the use of images as traces in criminal investigations. Current research efforts focus on technical issues and evidence assessment. This article provides a sound foundation for rationalising and explaining the processes involved in the production of clues from trace images. For example, the mechanisms through which these visual traces become clues of presence or action are described. An extensive literature review of forensic image analysis emphasises the existing guidelines and knowledge available for answering investigative questions (who, what, where, when and how). However, complementary developments are still necessary to demystify many aspects of image analysis in forensic science, including how to review and select images or use them to reconstruct an event or assist intelligence efforts. The hypothetico-deductive reasoning pathway used to discover unknown elements of an event or crime can also help scientists understand the underlying processes involved in their decision making. An analysis of a single image in an investigative or probative context is used to demonstrate the highly informative potential of images as traces and/or clues. Research efforts should be directed toward formalising the extraction and combination of clues from images. An appropriate methodology is key to expanding the use of images in forensic science.

  6. Landscape processes, effects and the consequences of migration in their management at the Jatún Mayu watershed (Bolivia)

    NASA Astrophysics Data System (ADS)

    Penna, Ivanna; Jaquet, Stephanie; Sudmeier-Rieux, Karen; Kaenzig, Raoul; Schwilch, Gudrun; Jaboyedoff, Michel; Liniger, Hanspeter; Machaca, Angelica; Cuba, Edgar; Boillat, Sebastien

    2014-05-01

    Bolivia has a large rural population, mostly composed of subsistence farmers who face naturally and anthropogenically driven processes affecting their livelihoods. In order to establish sustainable management strategies, it is important to understand the factors governing landscape changes. This work explores the geomorphic imprint and effects of naturally and anthropogenically driven processes on three mountain communities undergoing de-population in the Jatún Mayu watershed (Cochabamba, Bolivia). Based on satellite image interpretation, field work and household surveys, we identified gullies and landslides as the main active processes, causing land losses, affecting inter-communal roads, etc. While landslides mostly occur in the middle and lower sections of the basin, gullies especially affect the upper part (particularly the southern slope). Our analysis indicated that in the middle and lower parts of the basin, landslides develop in response to the Jatún Mayu incision (slopes reach a critical angle and slope failures increase). In the upper part, however, where no river down-cutting is taking place, preliminary analysis indicates that past and present human interventions (over-grazing, agriculture, road construction, etc.) play a key role in driving land degradation toward the formation of gullies. Based on the comparison of high-resolution images from 2004 and 2009, we determined an agricultural land loss rate of 8452 m2/year, mostly in the form of landslides. One single event swept away 0.03 km2 of agricultural land (~13 parcels), approximately 87% of an average household property. People's main concerns are hail, frost and droughts, because these cause an "immediate" loss of family income, but the impacts caused by landslides and gullies are not disregarded by the communities and the government. Communities are organized to set up and maintain key infrastructure such as irrigation canals and roads. They also intend to develop protective measures against erosion, such as check dams based on tyres filled with rocks. In addition, organizations supported by the government and institutions from abroad have built dams, reforested some slopes, and raised local capacities to improve soil conservation measures, e.g. through slow-forming terraces. However, rural-to-urban migration could be affecting the management of processes leading to land degradation. Around 77% of the 22 households surveyed have at least one migrant family member (permanent, seasonal or double-residence migrant). The labour force is reduced and, because of de-population, two of the three schools in the area have closed. In spite of the support that communities receive, our findings indicate that high population mobility is affecting land management practices and the capacity of communities to cope with land degradation processes.

  7. After the Fall: The RHESSI Legacy Archive

    NASA Astrophysics Data System (ADS)

    Schwartz, Richard A.; Zarro, Dominic M.; Tolbert, Anne K.

    2017-08-01

    Launched in 2002, the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) continues to observe the Sun with a nearly 50% duty cycle. During that time the instrument has recorded ~100,000 solar flares at energies from 4 keV to over 10 MeV, with durations of 10s to 1000s of seconds. However, whether because of the decline of the solar cycle, possible failure of the instrument, or the absence of funding, the operational phase will end someday. We describe here our plans to continue to serve this dataset in raw, processed, and analyzed forms to the worldwide solar community, to continue our legacy of a stream of rich scientific results. We have been providing, and continue to provide, a stream of quicklook lightcurves, spectra, and images that we mainly serve through a web interface, as well as the data in raw form to be fully analyzed within our own branch of Solar Software written in IDL. We are in the process of creating higher-quality images for flares in multiple energy bands on relevant timescales for those whose needs can be met without further processing. For users with IDL licenses we expect this software to be available far into the unknowable future. Together with a database of AIA cutouts during all SDO-era flares, along with software to recover saturated images by using the AIA diffraction fringes, these will be a highly used resource. We are also developing additional tools and databases that will increase the utility of RHESSI data to members of the community with and without either IDL licenses or full access to the RHESSI database. We will provide a database of RHESSI X-ray visibilities obtained during flares at a >4 second cadence over a broad range of detectable energies. With our IDL software those can be rendered as images for times and energies of nearly the analyst's choosing. Going beyond that, we are converting our imaging procedures to the Python language to eliminate the need for an IDL license. We are also developing methods to allow the customization of these visibilities in time and energy by access from a non-local server which has full access to all of the IDL software and database files.
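
    Since the archived visibilities are calibrated Fourier components of the X-ray source, even a simple direct back-projection illustrates how they can be rendered as images. The sketch below is a minimal, hypothetical Python illustration, not the RHESSI/SSW imaging code; the function name, units, and grid choices are assumptions.

    ```python
    import numpy as np

    def dirty_map(u, v, vis, fov_arcsec=64.0, npix=128):
        """Direct back-projection of complex visibilities V(u, v) onto a sky grid.
        u, v: spatial frequencies (cycles/arcsec); vis: complex amplitudes."""
        x = np.linspace(-fov_arcsec / 2, fov_arcsec / 2, npix)
        xx, yy = np.meshgrid(x, x)
        img = np.zeros((npix, npix))
        for uj, vj, Vj in zip(u, v, vis):
            img += (Vj * np.exp(2j * np.pi * (uj * xx + vj * yy))).real
        return img / len(vis)
    ```

    A real reconstruction would also deconvolve the sampling pattern (e.g., CLEAN or maximum entropy), but the Fourier-sum core is the same.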

  8. Error modeling and analysis of star cameras for a class of 1U spacecraft

    NASA Astrophysics Data System (ADS)

    Fowler, David M.

    As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has an impressive collection of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study lays the groundwork for determining the capabilities of a smartphone camera acting as a star camera. Orientations determined from star images taken with a smartphone camera are compared to those from higher-quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager so that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated through methods of identifying the stars exposed on each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.
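
    Once imaged stars are matched to catalog stars, the attitude follows from Wahba's problem: find the rotation that best maps catalog unit vectors onto camera-frame unit vectors. The study's own pipeline is not reproduced here; the following is a minimal sketch of the standard SVD solution, with all names chosen for illustration.

    ```python
    import numpy as np

    def solve_attitude(body_vecs, ref_vecs, weights=None):
        """Markley's SVD solution to Wahba's problem: rotation R minimizing
        sum_i w_i * ||b_i - R @ r_i||^2 for unit vectors b (camera) and r (catalog)."""
        b, r = np.asarray(body_vecs), np.asarray(ref_vecs)
        w = np.ones(len(b)) if weights is None else np.asarray(weights)
        B = np.einsum('n,ni,nj->ij', w, b, r)      # attitude profile matrix
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
        return U @ np.diag([1.0, 1.0, d]) @ Vt     # proper rotation matrix
    ```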

  9. Picturing the nurse-person/family/community process in the year 2050.

    PubMed

    Mitchell, Gail J

    2007-01-01

    How will nurses relate with persons in the year 2050? And how might technology enable or limit the nursing process with persons, families, and communities? These are the questions addressed in this column. Imagining practice in light of technological imaginings and projections is facilitated by a possible scenario that includes robots that not only monitor human biological processes but also emote compassion and caring that may one day be dosed according to the latest diagnostic prescription. Three nurses in this column present their views of how nursing might evolve. Karnick, aligned with the human becoming school of thought, imagines a practice anchored in respect for humanity and quality of life and an accompanying respect for nursing knowledge and nursing work. Senesac and Sato, aligned with Roy's adaptation model, call for nurses to envision and choose the future they want to have. Clear in both perspectives is a reverence for human values and human experience and for the critical role of nursing knowledge as we move toward the not-yet of 2050.

  10. Phenological dynamics of arctic tundra vegetation and its implications on satellite imagery interpretation

    NASA Astrophysics Data System (ADS)

    Juutinen, Sari; Aurela, Mika; Mikola, Juha; Räsänen, Aleksi; Virtanen, Tarmo

    2016-04-01

    Remote sensing is a key methodology for monitoring the responses of arctic ecosystems to climatic warming. The short growing season and rapid vegetation development, however, place demands on the timing of image acquisition in the arctic. We used multispectral very high spatial resolution satellite images to study the effect of vegetation phenology on spectral reflectance and image interpretation in the low arctic tundra of coastal Siberia (Tiksi, 71°35'39"N, 128°53'17"E). The study site mainly consists of peatlands, tussock, dwarf shrub, and grass tundra, and stony areas with some lichen and shrub patches. We tested the hypotheses that (1) plant phenology is responsive to interannual weather variation and (2) the phenological state of vegetation affects satellite image interpretation and the ability to distinguish between plant communities. We used an empirical transfer function with temperature sums as drivers to reconstruct daily leaf area index (LAI) for the different plant communities for the years 2005 and 2010-2014, based on measured LAI development in summer 2014. Satellite images taken during the growing season were acquired for two years, one with a late spring and short growing season and one with an early spring and long growing season. LAI dynamics showed considerable interannual variation due to weather variation, and particularly the relative contribution of graminoid-dominated communities was sensitive to these phenology shifts. We also analyzed the differences in reflectance values between the two satellite images, taking into account the LAI dynamics. These results will increase our understanding of the pitfalls that may arise from the timing of image acquisition when interpreting vegetation structure in a heterogeneous tundra landscape. Very high spatial resolution multispectral images are available at reasonable cost, but not at high temporal resolution, which may lead to compromises when matching ground truth and the imagery. On the other hand, to identify existing plant communities, high-resolution images are needed due to the fragmented nature of tundra vegetation communities. Temporal differences in phenology among different plant functional types may also obscure image interpretation when spatially low resolution images are used in heterogeneous landscapes. Phenological features of plant communities should be acknowledged when plant functional or community type based classifications are used in models to estimate global greenhouse gas emissions and when changes in vegetation are monitored, for example to indicate permafrost thawing or changes in growing season length.
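
    The abstract does not give the form of the transfer function; a common minimal choice is a logistic curve of the growing-degree-day sum. The sketch below is purely illustrative, and every parameter value (base temperature, maximum LAI, curve steepness) is an assumption, not a value from the study.

    ```python
    import numpy as np

    def reconstruct_lai(daily_mean_temp, base_temp=0.0, lai_max=1.2,
                        gdd_midpoint=150.0, steepness=0.03):
        """Toy transfer function: daily LAI as a logistic function of the
        cumulative temperature sum (growing degree days above base_temp)."""
        gdd = np.cumsum(np.maximum(np.asarray(daily_mean_temp) - base_temp, 0.0))
        return lai_max / (1.0 + np.exp(-steepness * (gdd - gdd_midpoint)))
    ```

    Fitting such a curve per community to the 2014 LAI measurements, then driving it with each year's temperature record, reproduces the kind of interannual LAI reconstruction described above.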

  11. Education resources in remote Australian Indigenous community dog health programs: a comparison of community and extra-community-produced resources.

    PubMed

    Constable, Sophie Elizabeth; Dixon, Roselyn May; Dixon, Robert John

    2013-09-01

    Commercial dog health programs in Australian Indigenous communities are a relatively recent occurrence. Health promotion for these programs is an even more recent development and lacks data on effective practices. This paper compares 38 resources created by veterinary-community partnerships in Indigenous communities with 71 resources available through local veterinary service providers. On average, community-produced resources used significantly more of the resource area as image, more imagery as communicative rather than decorative images, larger fonts and smaller segments of text, and used images of people with a range of skin tones. As well as informal registers of Standard Australian English, community-produced resources used Aboriginal English and/or Creole languages in their text, while extra-community (EC)-produced resources did not. The text of EC resources had Flesch-Kincaid reading grade levels that excluded a large proportion of community recipients. Also, they did not cover some topics of importance in communities, used academic, formal and technical language, and did not depict people of a representative range of skin tones. As such, community-produced resources were more relevant to the unique situations in remote communities, while EC resources were often inappropriate and in some cases could even distance recipients by using inappropriate language, formats and imagery.
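
    The Flesch-Kincaid grade level used to assess the text is a fixed formula over word, sentence and syllable counts; the snippet below computes it (the example counts are invented for illustration).

    ```python
    def flesch_kincaid_grade(words, sentences, syllables):
        """Standard Flesch-Kincaid grade-level formula."""
        return 0.39 * words / sentences + 11.8 * syllables / words - 15.59

    # Invented example: 120 words, 10 sentences, 150 syllables -> grade ~3.8,
    # readable for most adults; dense technical text easily scores above grade 12.
    print(round(flesch_kincaid_grade(120, 10, 150), 1))
    ```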

  12. Cultivating Bakhtin in the garden: Children's ecological narratives on becoming community gardeners

    NASA Astrophysics Data System (ADS)

    Grugel, Annie H.

    2009-12-01

    This dissertation illustrates how a children's community garden, designed specifically to promote intergenerational, multi-sociocultural relationships, is an "ideological environment" linking individuals and their community and connecting people with nature, in order to promote feelings of belonging, social connection, and encourage a sense of stewardship and identification with the environment (Bakhtin, 1978). By spending time in a community garden, responding to the natural ecosystems which exist on this land, and reflecting, through image and story about our childhood experience, the participants and I engaged in the dialogic process of what Thomashow (1996) refers to as "doing ecological identity work." Throughout this study I question how our past experiences with nature in ideological environments shape our ecological epistemologies, and how the dialogic process of becoming a gardener within the context of a community garden shapes a person's ecological identity. To frame this exploration of ecological identity work as a dialogic process and its role in the development of an ecological identity, I draw from sociocultural theory (Holland, et al., 1998), Bakhtin's theory of dialogism, and ecological identity studies (Clayton and Opotow, 2003; Cobb, 1993; Orr, 1994, 2006; Sobel, 1996, 2008; Thomashow, 1996). A large body of scholarly writing done by environmental researchers is devoted to examining and describing how adults, who self-identify as environmentalists, developed an ecological worldview. However, only a fraction of research is devoted to theorizing how children develop an environmental epistemology. In this study, I focus on how community gardens are dialogic spaces that provide a place for elementary-aged children to "experience" the discourse of gardening. Here, I describe the discourses that shape the garden and describe how gardeners, as a result of their collaborative experiences between human and non-human actors, take up social and dialogical tools for authoring new ecological identities.

  13. Bacterial Community Structure and Physiological State within an Industrial Phenol Bioremediation System

    PubMed Central

    Whiteley, Andrew S.; Bailey, Mark J.

    2000-01-01

    The structure of bacterial populations in specific compartments of an operational industrial phenol remediation system was assessed to examine bacterial community diversity, distribution, and physiological state with respect to the remediation of phenolic polluted wastewater. Rapid community fingerprinting by PCR-based denaturing gradient gel electrophoresis (DGGE) of 16S rDNA indicated highly structured bacterial communities residing in all nine compartments of the treatment plant and not exclusively within the Vitox biological reactor. Whole-cell targeting by fluorescent in situ hybridization with specific oligonucleotides (directed to the α, β and γ subclasses of the class Proteobacteria [α-, β-, and γ-Proteobacteria, respectively], the Cytophaga-Flavobacterium group, and the Pseudomonas group) tended to mirror gross changes in bacterial community composition when compared with DGGE community fingerprinting. At the whole-cell level, the treatment compartments were numerically dominated by cells assigned to the Cytophaga-Flavobacterium group and to the γ-Proteobacteria. The α subclass Proteobacteria were of low relative abundance throughout the treatment system whilst the β subclass of the Proteobacteria exhibited local dominance in several of the processing compartments. Quantitative image analysis of cellular fluorescence was used as an indicator of physiological state within the populations probed with rDNA. For cells hybridized with EUB338, the mean fluorescence per cell decreased with increasing phenolic concentration, indicating the strong influence of the primary pollutant upon cellular rRNA content. The γ subclass of the Proteobacteria had a ribosome content which correlated positively with total phenolics and thiocyanate. While members of the Cytophaga-Flavobacterium group were numerically dominant in the processing system, their abundance and ribosome content data for individual populations did not correlate with any of the measured chemical parameters. The potential importance of the γ-Proteobacteria and the Cytophaga-Flavobacteria during this bioremediation process was highlighted. PMID:10831417
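
    Per-cell fluorescence quantification of probe-labelled images, as used above, reduces to segmenting cells and averaging probe intensity within each labelled region. Here is a minimal scikit-image sketch (generic, not the authors' pipeline; the threshold choice and minimum area are assumptions):

    ```python
    import numpy as np
    from skimage import filters, measure

    def mean_fluorescence_per_cell(image, min_area=5):
        """Segment probe-labelled cells by Otsu thresholding and return the
        mean fluorescence intensity of each detected cell."""
        labels = measure.label(image > filters.threshold_otsu(image))
        regions = measure.regionprops(labels, intensity_image=image)
        return np.array([r.mean_intensity for r in regions if r.area >= min_area])
    ```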

  14. BioShaDock: a community driven bioinformatics shared Docker-based tools registry

    PubMed Central

    Moreews, François; Sallou, Olivier; Ménager, Hervé; Le Bras, Yvan; Monjeaud, Cyril; Blanchet, Christophe; Collin, Olivier

    2015-01-01

    Linux container technologies, as represented by Docker, provide an alternative to the complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry in authentication and permissions management, which enable its integration into existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user-defined tags to facilitate discovery, as well as a link to the tool description in the ELIXIR registry if one already exists. If it does not, BioShaDock will synchronize with the ELIXIR registry to create a new description based on the BioShaDock entry metadata. This link will help users get more information on the tool, such as its EDAM operations and input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community. PMID:26913191
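
    Publishing a tool to a private, authenticated Docker registry of this kind follows the ordinary Docker workflow. The sketch below uses the docker-py SDK; the registry host, repository name and credentials are placeholders, not BioShaDock's actual endpoint.

    ```python
    import docker

    client = docker.from_env()

    # Build an image from a local Dockerfile and tag it for a private registry.
    image, _ = client.images.build(path=".", tag="registry.example.org/tools/samtools:1.9")

    # Authenticate against the registry, then push (streaming the progress lines).
    client.login(username="alice", password="s3cret", registry="registry.example.org")
    for line in client.images.push("registry.example.org/tools/samtools",
                                   tag="1.9", stream=True, decode=True):
        print(line)
    ```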

  15. BioShaDock: a community driven bioinformatics shared Docker-based tools registry.

    PubMed

    Moreews, François; Sallou, Olivier; Ménager, Hervé; Le Bras, Yvan; Monjeaud, Cyril; Blanchet, Christophe; Collin, Olivier

    2015-01-01

    Linux container technologies, as represented by Docker, provide an alternative to the complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry in authentication and permissions management, which enable its integration into existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user-defined tags to facilitate discovery, as well as a link to the tool description in the ELIXIR registry if one already exists. If it does not, BioShaDock will synchronize with the ELIXIR registry to create a new description based on the BioShaDock entry metadata. This link will help users get more information on the tool, such as its EDAM operations and input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community.

  16. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies.

    PubMed

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.
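
    Verification-rate evaluations like the one described typically compare binary iris codes with a masked, rotation-compensated Hamming distance. The sketch below is the textbook (Daugman-style) comparison, not VASIR's internal API; all names are illustrative.

    ```python
    import numpy as np

    def iris_distance(code_a, code_b, mask_a, mask_b, max_shift=8):
        """Masked fractional Hamming distance between binary iris codes,
        minimized over small horizontal shifts to absorb eye rotation."""
        best = 1.0
        for s in range(-max_shift, max_shift + 1):
            b, mb = np.roll(code_b, s, axis=1), np.roll(mask_b, s, axis=1)
            valid = mask_a & mb
            if valid.sum():
                best = min(best, np.logical_xor(code_a, b)[valid].mean())
        return best  # ~0.0 for the same eye, ~0.5 for different eyes
    ```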

  17. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

    PubMed Central

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform. PMID:26401431

  18. Age discrimination among eruptives of Menengai Caldera, Kenya, using vegetation parameters from satellite imagery

    NASA Technical Reports Server (NTRS)

    Blodget, Herbert W.; Heirtzler, James R.

    1993-01-01

    Results are presented of an investigation to determine the degree to which digitally processed Landsat TM imagery can be used to discriminate among vegetated lava flows of different ages in the Menengai Caldera, Kenya. A selective series of five images, consisting of a color-coded Landsat 5 classification and four color composites, are compared with geologic maps. The most recent of more than 70 postcaldera flows within the caldera are trachytes, which are variably covered by shrubs and subsidiary grasses. Soil development evolves as a function of time, and as such supports a changing plant community. Progressively older flows exhibit the increasing dominance of grasses over bushes. The Landsat images correlated well with geologic maps, but the two mapped age classes could be further subdivided on the basis of different vegetation communities. It is concluded that field maps can be modified, and in some cases corrected by use of such imagery, and that digitally enhanced Landsat imagery can be a useful aid to field mapping in similar terrains.

  19. Lunar Processing Cabinet 2.0: Retrofitting Gloveboxes into the 21st Century

    NASA Technical Reports Server (NTRS)

    Calaway, M. J.

    2015-01-01

    In 2014, the Apollo 16 Lunar Processing Glovebox (cabinet 38) in the Lunar Curation Laboratory at NASA JSC received an upgrade including new technology interfaces. A Jacobs Technology Innovation Project provided the primary resources to retrofit this glovebox into the 21st century. The NASA Astromaterials Acquisition & Curation Office continues its more than 40-year heritage of preserving lunar materials for future scientific studies in state-of-the-art facilities. This enhancement has not only modernized the contamination controls, but also provides new innovative tools for processing and characterizing lunar samples and supports real-time exchange of sample images and information with the scientific community throughout the world.

  20. SCIFIO: an extensible framework to support scientific image formats.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-12-07

    No gold standard exists in the world of scientific image acquisition; a proliferation of instruments each with its own proprietary data format has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community was lacking a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.
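
    SCIFIO itself is a Java library built on SciJava plugin discovery; to make the architecture concrete without inventing its Java API, here is a language-neutral sketch in Python of the same idea: formats register themselves in a registry, and a generic open_image call dispatches to whichever reader claims the file. Everything here (the names, the .raw8 format) is illustrative.

    ```python
    import numpy as np

    FORMATS = {}

    def register_format(suffix):
        """Decorator implementing plugin-style discovery: each reader class
        registers itself for a file suffix, so I/O code never hard-codes formats."""
        def wrap(cls):
            FORMATS[suffix] = cls()
            return cls
        return wrap

    @register_format(".raw8")
    class Raw8Reader:
        def read(self, path, shape):
            return np.fromfile(path, dtype=np.uint8).reshape(shape)

    def open_image(path, **kwargs):
        suffix = path[path.rfind("."):]
        return FORMATS[suffix].read(path, **kwargs)
    ```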

  1. Lithographic image simulation for the 21st century with 19th-century tools

    NASA Astrophysics Data System (ADS)

    Gordon, Ronald L.; Rosenbluth, Alan E.

    2004-01-01

    Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well understood and need little further investigation. The imaging process in optical lithography is modeled as a partially coherent, Köhler illumination system. As Hopkins has shown, we can separate the computation into two pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and another that needs only the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of quantities called Transmission Cross-Coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The downside, however, is that the number of these expensive double integrals increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will purportedly represent a manufacturing process. This work reviews the current methodologies used to simulate the intensity distribution in air above the resist and addresses the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can be either evaluated analytically or replaced by analytical functions accurate to within machine precision. With the burden of computing these numbers lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
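
    For reference, the Hopkins formulation the abstract relies on is usually written as follows (standard textbook form; the notation is chosen here, with J the effective source, P the pupil, and M-tilde the mask spectrum):

    ```latex
    I(\mathbf{x}) \;=\; \iint TCC(\mathbf{f}',\mathbf{f}'')\,
        \tilde{M}(\mathbf{f}')\,\tilde{M}^{*}(\mathbf{f}'')\,
        e^{\,2\pi i(\mathbf{f}'-\mathbf{f}'')\cdot\mathbf{x}}\;
        d\mathbf{f}'\,d\mathbf{f}''
    \qquad
    TCC(\mathbf{f}',\mathbf{f}'') \;=\; \iint J(\mathbf{f})\,
        P(\mathbf{f}+\mathbf{f}')\,P^{*}(\mathbf{f}+\mathbf{f}'')\,d\mathbf{f}
    ```

    The TCC double integrals depend only on the source, pupil and pitch, which is why they can be precomputed once and reused across mask edits in MBOPC.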

  2. An Extensible Processing Framework for Eddy-covariance Data

    NASA Astrophysics Data System (ADS)

    Durden, D.; Fox, A. M.; Metzger, S.; Sturtevant, C.; Durden, N. P.; Luo, H.

    2016-12-01

    The evolution of large data-collecting networks has led not only to an increase in available information, but also to greater complexity in analyzing the observations. Timely dissemination of readily usable data products necessitates a streaming processing framework that is both automatable and flexible. Tower networks, such as ICOS, AmeriFlux, and NEON, exemplify this issue by requiring large amounts of data to be processed from dispersed measurement sites. Eddy-covariance data from across the NEON network are expected to amount to 100 gigabytes per day. The complexity of the algorithmic processing necessary to produce high-quality data products, together with the continued development of new analysis techniques, led to the development of a modular R package, eddy4R. This allows algorithms provided by NEON and the larger community to be deployed in streaming processing and to be used by community members alike. In order to control the processing environment, provide a proficient parallel processing structure, and certify that dependencies are available during processing, we chose Docker as our "Development and Operations" (DevOps) platform. The Docker framework allows our processing algorithms to be developed, maintained and deployed at scale. Additionally, the eddy4R-Docker framework fosters community use and extensibility via pre-built Docker images and the GitHub distributed version control system. The capability to process large data sets relies upon efficient input and output of data, data compressibility to reduce compute resource loads, and the ability to easily package metadata. The Hierarchical Data Format (HDF5) is a file format that can meet these needs. A NEON standard HDF5 file structure and metadata attributes allow users to explore larger data sets in an intuitive "directory-like" structure adopting the NEON data product naming conventions.
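
    The "directory-like" HDF5 layout is easy to picture with a few lines of h5py; the group path and attribute names below are invented stand-ins for the NEON conventions, not the actual product names.

    ```python
    import h5py
    import numpy as np

    # Hypothetical NEON-style hierarchy: site group -> data product -> table,
    # with units and provenance carried as HDF5 attributes.
    with h5py.File("tower_fluxes.h5", "w") as f:
        dset = f.create_dataset("/CPER/fluxCo2/turb",
                                data=np.random.randn(48, 3),
                                compression="gzip", compression_opts=4)
        dset.attrs["unit"] = "umol CO2 m-2 s-1"
        dset.attrs["pipeline"] = "eddy4R"
    ```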

  3. Wavelength calibration of imaging spectrometer using atmospheric absorption features

    NASA Astrophysics Data System (ADS)

    Zhou, Jiankang; Chen, Yuheng; Chen, Xinhua; Ji, Yiqun; Shen, Weimin

    2012-11-01

    Imaging spectrometers are promising remote sensing instruments widely used in many fields, such as hazard forecasting and environmental monitoring. The reliability of the spectral data is of central importance to the scientific community. The wavelength positions at the focal plane of an imaging spectrometer will change as pressure and temperature vary, or with mechanical vibration. It is difficult for an onboard calibration instrument to maintain the accuracy of the spectral reference by itself, and it also occupies weight and volume on the remote sensing platform. Because the spectral images record atmospheric effects, including carbon dioxide, water vapor and oxygen absorption as well as solar Fraunhofer lines, onboard wavelength calibration can be performed from the spectral images themselves. In this paper, wavelength calibration is based on modeled and measured atmospheric absorption spectra. The modeled spectra are constructed with an atmospheric radiative transfer code. The spectral angle is used to determine the best spectral similarity between the modeled and measured spectra and thus estimate the wavelength position. The smile shape can be obtained when the matching process is applied across all columns of the data. The present method was successfully applied to Hyperion data. The value of the wavelength shift was obtained by shape matching of the oxygen absorption feature, and the characteristics are comparable to those of the prelaunch measurements.
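
    The spectral-angle matching step is compact enough to show directly. In the sketch below, model_fn stands for a radiative-transfer-modeled at-sensor spectrum sampled at given wavelengths (a placeholder for whatever code generates it); the rest is the standard spectral angle mapper applied over candidate shifts.

    ```python
    import numpy as np

    def spectral_angle(a, b):
        """Angle between two spectra treated as vectors (smaller = more similar)."""
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    def estimate_wavelength_shift(measured, wavelengths, model_fn, shifts_nm):
        """Return the wavelength offset whose shifted modeled spectrum best
        matches the measured column spectrum around an absorption feature."""
        angles = [spectral_angle(measured, model_fn(wavelengths + s)) for s in shifts_nm]
        return shifts_nm[int(np.argmin(angles))]
    ```

    Running this per detector column traces out the smile curve.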

  4. Using modern imaging techniques to old HST data: a summary of the ALICE program.

    NASA Astrophysics Data System (ADS)

    Choquet, Elodie; Soummer, Remi; Perrin, Marshall; Pueyo, Laurent; Hagan, James Brendan; Zimmerman, Neil; Debes, John Henry; Schneider, Glenn; Ren, Bin; Milli, Julien; Wolff, Schuyler; Stark, Chris; Mawet, Dimitri; Golimowski, David A.; Hines, Dean C.; Roberge, Aki; Serabyn, Eugene

    2018-01-01

    Direct imaging of extrasolar systems is a powerful technique to study the physical properties of exoplanetary systems and understand their formation and evolution mechanisms. The detection and characterization of these objects are challenged by their high contrast with their host star. Several observing strategies and post-processing algorithms have been developed for ground-based high-contrast imaging instruments, enabling the discovery of directly imaged and spectrally characterized exoplanets. The Hubble Space Telescope (HST), a pioneer in directly imaging extrasolar systems, has often been limited to the detection of bright debris disk systems, with sensitivity limited by the difficulty of implementing an optimal PSF subtraction strategy, which is readily available on ground-based telescopes observing in pupil-tracking mode. The Archival Legacy Investigations of Circumstellar Environments (ALICE) program is a consistent re-analysis of the decade-old coronagraphic archive of HST's NICMOS infrared imager. Using post-processing methods developed for ground-based observations, we used the whole archive to calibrate PSF temporal variations and improve NICMOS's detection limits. We have now delivered ALICE-reprocessed science products for the entire NICMOS archive back to the community. These science products, as well as the ALICE pipeline, were used to prototype the JWST coronagraphic data reduction pipeline. The ALICE program has enabled the detection of 10 faint debris disk systems never before imaged in the near-infrared and of several substellar companion candidates, all of which we are in the process of characterizing through follow-up observations with both ground-based facilities and HST-STIS coronagraphy. In this publication, we provide a summary of the results of the ALICE program, advertise its science products and discuss the prospects of the program.
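
    The reference-library PSF subtraction at the heart of such reprocessing can be sketched in a few lines: build a principal-component basis from reference coronagraphic frames, project the science frame onto it, and subtract the reconstruction. This is a bare-bones KLIP-style illustration under assumed inputs, not the ALICE pipeline itself.

    ```python
    import numpy as np

    def pca_psf_subtract(science, references, n_modes=5):
        """science: 2-D frame; references: stack of reference PSF frames
        (n_ref x ny x nx) of the same shape. Returns the residual frame."""
        R = references.reshape(len(references), -1)
        mean = R.mean(axis=0)
        U, S, Vt = np.linalg.svd(R - mean, full_matrices=False)
        modes = Vt[:n_modes]                  # orthonormal PSF eigenimages
        s = science.ravel() - mean
        psf_model = modes.T @ (modes @ s)     # projection onto the basis
        return (s - psf_model).reshape(science.shape)
    ```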

  5. Photogrammetry on glaciers: Old and new knowledge

    NASA Astrophysics Data System (ADS)

    Pfeffer, W. T.; Welty, E.; O'Neel, S.

    2014-12-01

    In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in the image-processing power of computers, and very innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery but have no efficient means to extract the data they desire from them. In many cases these are single-image time series where the processing limitation lies in the paucity of methods to obtain 3-dimensional object space information from measurements in the 2-dimensional image space; in other cases camera pairs have been operated but no automated means is at hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results, and sometimes buried in decades-old pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers. Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, some further effort to share knowledge and methods would be a great benefit for the community. We consider some of the main problems to be solved, and aspects of how optimal knowledge sharing might be accomplished.

  6. Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data

    NASA Astrophysics Data System (ADS)

    Kumar, C.; Shetty, A.; Raval, S.; Champatiray, P. K.; Sharma, R.

    2014-11-01

    This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using the Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in the hostile mountainous terrain of the Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) analysis and endmember extraction from the reflectance image of surface minerals such as illite, montmorillonite, phlogopite, dolomite and chlorite. These endmembers were then assessed against the USGS mineral spectral library and laboratory spectra of rock samples collected in the field for spectral inspection. Subsequently, the MTTCIMF algorithm was applied to the processed image to obtain a mineral distribution map for each detected mineral. A virtual verification method was adopted to evaluate the classified image, which uses image information directly to evaluate the result, and confirmed an overall accuracy and kappa coefficient of 68% and 0.6, respectively. Sub-pixel mineral information of reasonable accuracy could be a valuable guide for the geological and exploration community before expensive ground and/or laboratory experiments to discover economic deposits. Thus, the study demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using the MTTCIMF algorithm in a cost- and time-effective approach.
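
    MTTCIMF combines several toolbox-specific steps, but its core ancestor, the statistical matched filter, is easy to state: whiten the data with the background covariance and correlate with the target spectrum. Below is a generic sketch of that matched filter (not the MTTCIMF implementation); the regularization constant is an assumption.

    ```python
    import numpy as np

    def matched_filter(X, target):
        """X: (pixels x bands) reflectance matrix; target: endmember spectrum.
        Returns per-pixel abundance-like scores, ~1 at pure target pixels."""
        mu = X.mean(axis=0)
        C = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
        w = np.linalg.solve(C, target - mu)
        return (X - mu) @ w / ((target - mu) @ w)
    ```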

  7. Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans

    NASA Astrophysics Data System (ADS)

    Boussaha, M.; Vallet, B.; Rives, P.

    2018-05-01

    The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.

  8. The Next-Generation Very Large Array: Technical Overview

    NASA Astrophysics Data System (ADS)

    McKinnon, Mark; Selina, Rob

    2018-01-01

    As part of its mandate as a national observatory, the NRAO is looking toward the long range future of radio astronomy and fostering the long term growth of the US astronomical community. NRAO has sponsored a series of science and technical community meetings to consider the science mission and design of a next-generation Very Large Array (ngVLA), building on the legacies of the Atacama Large Millimeter/submillimeter Array (ALMA) and the Very Large Array (VLA).The basic ngVLA design emerging from these discussions is an interferometric array with approximately ten times the sensitivity and ten times higher spatial resolution than the VLA and ALMA radio telescopes, optimized for operation in the wavelength range 0.3cm to 3cm. The ngVLA would open a new window on the Universe through ultra-sensitive imaging of thermal line and continuum emission down to milli-arcsecond resolution, as well as unprecedented broadband continuum polarimetric imaging of non-thermal processes. The specifications and concepts for major ngVLA system elements are rapidly converging.We will provide an overview of the current system design of the ngVLA. The concepts for major system elements such as the antenna, receiving electronics, and central signal processing will be presented. We will also describe the major development activities that are presently underway to advance the design.

  9. Contextualising and Analysing Planetary Rover Image Products through the Web-Based PRoGIS

    NASA Astrophysics Data System (ADS)

    Morley, Jeremy; Sprinks, James; Muller, Jan-Peter; Tao, Yu; Paar, Gerhard; Huber, Ben; Bauer, Arnold; Willner, Konrad; Traxler, Christoph; Garov, Andrey; Karachevtseva, Irina

    2014-05-01

    The international planetary science community has launched, landed and operated dozens of human and robotic missions to the planets and the Moon. They have collected various surface imagery that has only been partially utilized for further scientific purposes. The FP7 project PRoViDE (Planetary Robotics Vision Data Exploitation) is assembling a major portion of the imaging data gathered so far from planetary surface missions into a unique database, bringing them into a spatial context and providing access to a complete set of 3D vision products. Processing is complemented by a multi-resolution visualization engine that combines various levels of detail for seamless and immersive real-time access to dynamically rendered 3D scenes. PRoViDE aims to (1) complete relevant 3D vision processing of planetary surface missions, such as Surveyor, Viking, Pathfinder, MER, MSL, Phoenix, Huygens, and lunar ground-level imagery from Apollo, the Russian Lunokhod and selected Luna missions, (2) provide the highest-resolution and most accurate remote sensing (orbital) vision data processing results for these sites to embed the robotic imagery and its products into their spatial planetary context, (3) collect 3D vision processing and remote sensing products within a single coherent spatial database, (4) realise seamless fusion between orbital and ground vision data, (5) demonstrate the potential of planetary surface vision data by maximising image quality visualisation in a 3D publishing platform, (6) collect and formulate use cases for novel scientific application scenarios exploiting the newly introduced spatial relationships and presentation, (7) demonstrate the concepts for MSL, and (8) realize on-line dissemination of key data and its presentation by a web-based GIS and rendering tool named PRoGIS (Planetary Robotics GIS). PRoGIS is designed to give access to rover image archives in geographical context, using projected image view cones, obtained from existing metadata and updated according to processing results, as a means to interact with and explore the archive. However, PRoGIS is more than a source data explorer. It is linked to the PRoVIP (Planetary Robotics Vision Image Processing) system, which includes photogrammetric processing tools to extract terrain models, compose panoramas, and explore and exploit multi-view stereo (where features on the surface have been imaged from different rover stops). We have started with the Opportunity MER rover as our test mission, but the system is being designed to be multi-mission, taking advantage in particular of UCL MSSL's PDS mirror, and we intend to at least deal with both MER rovers and MSL. For the period of PRoViDE until the end of 2015, the further intent is to handle lunar and other Martian rover and descent camera data. The presentation discusses the challenges of integrating rover- and orbital-derived data into a single geographical framework, especially reconstructing view cones; our human-computer interaction intentions in creating an interface to the rover data that is accessible to planetary scientists; how we handle multi-mission data in the database; and a demonstration of the resulting system and its processing capabilities. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 PRoViDE.
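
    The projected view cones that PRoGIS draws can be approximated, for map display, by a simple fan polygon computed from the camera station, pointing azimuth and field of view; the helper below is a hypothetical illustration of that geometry, not PRoGIS code.

    ```python
    import numpy as np

    def view_cone_polygon(x, y, azimuth_deg, fov_deg, range_m, n_arc=16):
        """Fan-shaped polygon (local map coordinates, azimuth clockwise from
        north) approximating a rover camera's projected view cone footprint."""
        half = fov_deg / 2.0
        angles = np.radians(np.linspace(azimuth_deg - half, azimuth_deg + half, n_arc))
        arc = [(x + range_m * np.sin(a), y + range_m * np.cos(a)) for a in angles]
        return [(x, y)] + arc + [(x, y)]   # closed ring suitable for a GIS layer
    ```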

  10. Mapping and monitoring changes in vegetation communities of Jasper Ridge, CA, using spectral fractions derived from AVIRIS images

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Roberts, Dar A.; Adams, John B.; Smith, Milton O.

    1993-01-01

    An important application of remote sensing is to map and monitor changes over large areas of the land surface. This is particularly significant with the current interest in monitoring vegetation communities. Most traditional methods for mapping different types of plant communities are based upon statistical classification techniques (e.g., parallelepiped, nearest-neighbor) applied to uncalibrated multispectral data. Classes from these techniques are typically difficult to interpret (particularly for a field ecologist/botanist). Also, classes derived from one image can be very different from those derived from another image of the same area, making interpretation of observed temporal changes nearly impossible. More recently, neural networks have been applied to classification. Neural network classification, based upon spectral matching, is weak in dealing with spectral mixtures (a condition prevalent in images of natural surfaces). Another approach to mapping vegetation communities is based on spectral mixture analysis, which can provide a consistent framework for image interpretation. Roberts et al. (1990) mapped vegetation using the band residuals from a simple mixing model (the same spectral endmembers applied to all image pixels). Sabol et al. (1992b) and Roberts et al. (1992) used different methods to apply the most appropriate spectral endmembers to each image pixel, thereby allowing mapping of vegetation based upon the different endmember spectra. In this paper, we describe a new approach to classification of vegetation communities based upon the spectral fractions derived from spectral mixture analysis. This approach was applied to three 1992 AVIRIS images of Jasper Ridge, California to observe seasonal changes in surface composition.
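
    Spectral mixture analysis models each pixel spectrum as a linear combination of endmember spectra, so the fractions come from a constrained least-squares fit. A minimal sketch with a non-negativity constraint follows (an illustrative solver choice, not necessarily the authors' exact one):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(pixel, endmembers):
        """pixel: (bands,) spectrum; endmembers: (bands x n_endmembers) matrix.
        Returns non-negative fractions rescaled to sum to one, plus the residual."""
        fractions, residual = nnls(endmembers, pixel)
        total = fractions.sum()
        return (fractions / total if total > 0 else fractions), residual
    ```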

  11. Interoperability in planetary research for geospatial data analysis

    NASA Astrophysics Data System (ADS)

    Hare, Trent M.; Rossi, Angelo P.; Frigeri, Alessandro; Marmo, Chiara

    2018-01-01

    For more than a decade there has been a push in the planetary science community to support interoperable methods for accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (e.g., craters, volcanoes) or any data that can be tied to the surface of a planetary body (including moons, comets or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized geospatial image formats, geologic mapping conventions, U.S. Federal Geographic Data Committee (FGDC) cartographic and metadata standards, and notably on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter include defined standards such as the OGC Web Map Services (simple image maps), Web Map Tile Services (cached image tiles), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they can be just as valuable for the planetary domain. Another initiative, called VESPA (Virtual European Solar and Planetary Access), will marry several of the above geoscience standards with astronomy-based standards as defined by the International Virtual Observatory Alliance (IVOA). This work outlines the current state of interoperability initiatives in use or in the process of being researched within the planetary geospatial community.
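
    To make the OGC service model concrete, a WMS GetMap call is just an HTTP request whose parameter names are fixed by the standard; only the endpoint and layer name below are invented placeholders.

    ```python
    import requests

    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",  # per OGC WMS 1.3.0
        "LAYERS": "mars_mola_shaded", "STYLES": "",
        "CRS": "EPSG:4326", "BBOX": "-30,0,30,60",                   # lat/lon bounds
        "WIDTH": 512, "HEIGHT": 512, "FORMAT": "image/png",
    }
    resp = requests.get("https://maps.example.org/wms", params=params, timeout=30)
    with open("map_tile.png", "wb") as f:
        f.write(resp.content)
    ```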

  12. Rapid development of medical imaging tools with open-source libraries.

    PubMed

    Caban, Jesus J; Joshi, Alark; Nagy, Paul

    2007-11-01

    Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. What they offer is now considered standard in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.
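
    As a taste of how little code such libraries demand for a working prototype, the snippet below chains reading, denoising and thresholding with SimpleITK, one open-source toolkit of the kind the survey covers (the file names are placeholders).

    ```python
    import SimpleITK as sitk

    # Read a volume, denoise it, and extract a rough foreground mask.
    img = sitk.ReadImage("ct_volume.nii.gz", sitk.sitkFloat32)
    smoothed = sitk.CurvatureFlow(img, timeStep=0.125, numberOfIterations=5)
    mask = sitk.OtsuThreshold(smoothed, 0, 1)  # voxels above the Otsu threshold -> 1
    sitk.WriteImage(mask, "ct_mask.nii.gz")
    ```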

  13. Using destination image to predict visitors' intention to revisit three Hudson River Valley, New York, communities

    Treesearch

    Rudy M. Schuster; Laura Sullivan; Duarte Morais; Diane Kuehn

    2009-01-01

    This analysis explores the differences in Affective and Cognitive Destination Image among three Hudson River Valley (New York) tourism communities. Multiple regressions were used with six dimensions of visitors' images to predict future intention to revisit. Two of the three regression models were significant. The only significantly contributing independent...

  14. Design Application Translates 2-D Graphics to 3-D Surfaces

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Fabric Images Inc., specializing in the printing and manufacturing of fabric tension architecture for the retail, museum, and exhibit/tradeshow communities, designed software to translate 2-D graphics for 3-D surfaces prior to print production. Fabric Images' fabric-flattening design process models a 3-D surface based on computer-aided design (CAD) specifications. The surface geometry of the model is used to form a 2-D template, similar to a flattening process developed by NASA's Glenn Research Center. This template or pattern is then applied in the development of a 2-D graphic layout. Benefits of this process include 11.5 percent time savings per project, less material wasted, and the ability to improve upon graphic techniques and offer new design services. Partners include Exhibitgroup/Giltspur (end-user client: TAC Air, a division of Truman Arnold Companies Inc.), Jack Morton Worldwide (end-user client: Nickelodeon), as well as 3D Exhibits Inc., and MG Design Associates Corp.

  15. Room acoustics analysis using circular arrays: an experimental study based on sound field plane-wave decomposition.

    PubMed

    Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo

    2013-04-01

    Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be very useful tools within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method for assessing the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.
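
    A full PWD uses circular-harmonic processing, but the underlying idea, steering a circular array across candidate arrival angles and measuring energy, can be shown with a plain delay-and-sum sketch (integer-sample delays; all geometry parameters are assumptions, and this is a schematic stand-in, not the authors' method).

    ```python
    import numpy as np

    def angular_spectrum(signals, fs, radius, mic_angles, c=343.0, n_dirs=360):
        """signals: (n_mics x n_samples) impulse responses; mic_angles: mic
        azimuths (rad) on a circle of given radius. Returns energy vs. angle."""
        thetas = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
        energy = np.zeros(n_dirs)
        for k, th in enumerate(thetas):
            # Relative arrival delay of a plane wave from azimuth th at each mic.
            delays = radius * np.cos(th - np.asarray(mic_angles)) / c
            shifts = np.round(delays * fs).astype(int)
            aligned = np.array([np.roll(s, -d) for s, d in zip(signals, shifts)])
            energy[k] = np.sum(aligned.mean(axis=0) ** 2)
        return thetas, energy
    ```

    Peaks in the returned spectrum indicate the arrival directions of the direct sound and of the reflections.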

  16. An Exploratory Study of Residents' Perception of Place Image: The Case of Kavala.

    PubMed

    Stylidis, Dimitrios; Sit, Jason; Biran, Avital

    2016-05-01

    Studies on place image have predominantly focused on the tourists' destination image and have given limited attention to other stakeholders' perspectives. This study aims to address this gap by focusing on the notion of residents' place image, whereby it reviews existing literature on residents' place image in terms of whether common attributes can be identified, and examines the role of community-focused attributes in its measurement. Data collected from a sample of 481 Kavala residents (Greece) were subjected to exploratory and confirmatory factor analysis. The study reveals that the existing measurement tools have typically emphasized destination-focused attributes and neglected community-focused attributes. This study contributes to the residents' place image research by proposing a more holistic measurement, which consisted of four dimensions: physical appearance, community services, social environment, and entertainment opportunities. The study also offers practical insights for developing and promoting a tourist place while simultaneously enhancing its residents' quality of life.

  17. An Exploratory Study of Residents’ Perception of Place Image

    PubMed Central

    Stylidis, Dimitrios; Sit, Jason; Biran, Avital

    2014-01-01

    Studies on place image have predominantly focused on the tourists’ destination image and have given limited attention to other stakeholders’ perspectives. This study aims to address this gap by focusing on the notion of residents’ place image, whereby it reviews existing literature on residents’ place image in terms of whether common attributes can be identified, and examines the role of community-focused attributes in its measurement. Data collected from a sample of 481 Kavala residents (Greece) were subjected to exploratory and confirmatory factor analysis. The study reveals that the existing measurement tools have typically emphasized destination-focused attributes and neglected community-focused attributes. This study contributes to the residents’ place image research by proposing a more holistic measurement, which consisted of four dimensions: physical appearance, community services, social environment, and entertainment opportunities. The study also offers practical insights for developing and promoting a tourist place while simultaneously enhancing its residents’ quality of life. PMID:29708109

  18. The salt marsh vegetation spread dynamics simulation and prediction based on conditions optimized CA

    NASA Astrophysics Data System (ADS)

    Guan, Yujuan; Zhang, Liquan

    2006-10-01

    The conservation and management of salt marsh vegetation relies on processing its spatial information. At present, more attention is focused on classification surveys and on qualitatively describing dynamics based on interpreted remote sensing images, rather than on simulating and predicting those dynamics quantitatively, which is of greater importance for managing and planning salt marsh vegetation. In this paper, our aim is to build a large-scale dynamic model and to provide a virtual laboratory in which researchers can run it according to their requirements. Firstly, the characteristics of cellular automata were analyzed, and we concluded that a CA model must be extended geographically under varying space-time conditions in order to make its results match the facts accurately. Based on the conventional cellular automata model, we introduced several new conditions to optimize it for simulating the vegetation objectively, such as elevation, growth speed, invasion ability, variation and inheritance. In this way, CA cells and remote sensing image pixels, cell neighbors and pixel neighbors, and cell rules and the nature of the plants were unified, respectively. The test site is JiuDuanSha, which mainly holds Phragmites australis (P. australis), Scirpus mariqueter (S. mariqueter) and Spartina alterniflora (S. alterniflora) communities. The paper explores the process of simulating and predicting these salt marsh vegetation changes with the condition-optimized CA (COCA) model, and examines the links among data, statistical models, and ecological predictions. This study exploits the potential of applying the condition-optimized CA model technique to solve this problem.
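
    As a flavor of what such a condition-optimized CA looks like, the toy update rule below lets an empty cell become vegetated with a probability that grows with the number of vegetated neighbors, gated by an elevation band; all parameter values are invented for illustration and are not the COCA model's calibrated rules.

    ```python
    import numpy as np

    def ca_step(state, elevation, spread_p=0.2, zmin=1.0, zmax=3.5,
                rng=np.random.default_rng()):
        """One generation: 0 = bare, 1 = vegetated. Colonization probability
        rises with vegetated 8-neighbors, restricted to a suitable elevation band."""
        occ = (state == 1).astype(int)
        nbrs = sum(np.roll(np.roll(occ, i, 0), j, 1)
                   for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
        p = 1.0 - (1.0 - spread_p) ** nbrs
        can_grow = (state == 0) & (elevation >= zmin) & (elevation <= zmax)
        new = state.copy()
        new[can_grow & (rng.random(state.shape) < p)] = 1
        return new
    ```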

  19. Mapping grass communities based on multi-temporal Landsat TM imagery and environmental variables

    NASA Astrophysics Data System (ADS)

    Zeng, Yuandi; Liu, Yanfang; Liu, Yaolin; de Leeuw, Jan

    2007-06-01

    Information on the spatial distribution of grass communities in wetlands is increasingly recognized as important for effective wetland management and biological conservation. Remote sensing has proved to be an effective alternative to intensive and costly ground surveys for mapping grass communities. However, the mapping accuracy achieved for grass communities in wetlands is still unsatisfactory. The aim of this paper is to develop an effective method to map grass communities in the Poyang Lake Natural Reserve. Through statistical analysis, elevation was selected as an environmental variable because of its strong relationship with the distribution of grass communities; NDVI layers stacked from images of different months were used to generate the Carex community map, and the October image was used to discriminate the Miscanthus and Cynodon communities. Classifications were first performed with a maximum likelihood classifier using a single-date satellite image with and without elevation; layered classifications were then performed using multi-temporal satellite imagery and elevation with a maximum likelihood classifier, a decision tree and an artificial neural network separately. The results show that environmental variables can improve the mapping accuracy, and that classification with multi-temporal imagery and elevation is significantly better than that with a single-date image and elevation (p=0.001). Moreover, maximum likelihood (a=92.71%, k=0.90) and artificial neural network (a=94.79%, k=0.93) perform significantly better than the decision tree (a=86.46%, k=0.83).
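
    Statistically, maximum likelihood classification of multispectral pixels amounts to fitting one Gaussian per class. A minimal sketch of the layered feature stack (multi-temporal NDVI plus elevation) fed to such a classifier, assuming scikit-learn and invented array names:

      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

      # Hypothetical inputs: ndvi_stack is (months, rows, cols), elevation is
      # (rows, cols), train_mask holds class labels at ground-truth pixels (0 = none).
      def classify(ndvi_stack, elevation, train_mask):
          features = np.vstack([ndvi_stack.reshape(ndvi_stack.shape[0], -1),
                                elevation.ravel()[None, :]]).T   # (pixels, bands+1)
          labeled = train_mask.ravel() > 0
          # Per-class Gaussian model = the usual maximum likelihood classifier.
          mlc = QuadraticDiscriminantAnalysis()
          mlc.fit(features[labeled], train_mask.ravel()[labeled])
          return mlc.predict(features).reshape(elevation.shape)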

  20. The 2014 interferometric imaging beauty contest

    NASA Astrophysics Data System (ADS)

    Monnier, John D.; Berger, Jean-Philippe; Le Bouquin, Jean-Baptiste; Tuthill, Peter G.; Wittkowski, Markus; Grellmann, Rebekka; Müller, André; Renganswany, Sridhar; Hummel, Christian; Hofmann, Karl-Heinz; Schertl, Dieter; Weigelt, Gerd; Young, John; Buscher, David; Sanchez-Bermudez, Joel; Alberdi, Antxon; Schoedel, Rainer; Köhler, Rainer; Soulez, Ferréol; Thiébaut, Éric; Kluska, Jacques; Malbet, Fabien; Duvert, Gilles; Kraus, Stefan; Kloppenborg, Brian K.; Baron, Fabien; de Wit, Willem-Jan; Rivinius, Thomas; Merand, Antoine

    2014-07-01

    Here we present the results of the 6th biennial optical interferometry imaging beauty contest. Taking advantage of a unique opportunity, the red supergiant VY CMa and the Mira variable R Car were observed in the astronomical H-band with three 4-telescope configurations of the VLTI-AT array using the PIONIER instrument. The community was invited to participate in the subsequent image reconstruction and interpretation phases of the project. Ten groups submitted entries to the beauty contest, and we found reasonable consistency between images obtained by independent workers using quite different algorithms. We also found that significant differences existed between the submitted images, much greater than in past beauty contests, which were all based on simulated data. A novel "crowd-sourcing" method allowed consensus median images to be constructed, filtering likely artifacts and retaining real features. We definitively detect strong spots on the surfaces of both stars as well as distinct circumstellar shells of emission (likely water/CO) around R Car. In a close contest, Joel Sanchez (IAA-CSIC/Spain) was named the winner of the 2014 interferometric imaging beauty contest. This process has shown that "newcomers" can use publicly-available imaging software to interpret VLTI/PIONIER imaging data, as long as sufficient observations are taken to have complete uv coverage, a luxury that is often missing. We urge proposers to request adequate observing nights to collect sufficient data for imaging, and time allocation committees to recognise the importance of uv coverage for reliable interpretation of interferometric data. We believe that the result of the proposed broad international project will contribute to inspiring trust in the image reconstruction processes in optical interferometry.
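
    The consensus step described above is, at its core, a pixel-wise median over the independently reconstructed images. A minimal sketch, assuming the reconstructions are already registered on a common grid and flux-normalized:

      import numpy as np

      def consensus_median(recons):
          """Pixel-wise median of a list of registered reconstructions.

          Features present in most submissions survive; artifacts unique
          to a single algorithm are filtered out by the median.
          """
          stack = np.stack(recons)          # (n_images, ny, nx)
          return np.median(stack, axis=0)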

  1. Multi-temporal database of High Resolution Stereo Camera (HRSC) images - Alpha version

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.

    2014-04-01

    Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), showed that the surface of Mars has changed dramatically and is in fact continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass wasting processes [11], changes of the polar caps [12] and impact cratering processes [13]. In addition, comparisons between Mariner, Viking and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can affect the global heat balance and the circulation of winds, which can result in further surface changes [14,15]. The High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore well suited to detect the frequency, extent and origin of Martian surface changes. Since 2003, HRSC has acquired high-resolution images of the Martian surface and contributed to Martian research, with a focus on surface morphology, geology and mineralogy, the role of liquid water on the surface and in the atmosphere, and volcanism, as well as on the proposed climate change throughout Martian history, and it has significantly improved our understanding of the evolution of Mars [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both data platforms are frequently used by the scientific community and provide additional software and environments to further generate map-projected and geometrically calibrated HRSC data. However, while previews of the images are available, there is no way to quickly and conveniently see the spatial and temporal availability of HRSC images in a specific region, which is important for detecting the surface changes that occurred between two or more images.
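
    The missing capability is essentially a spatio-temporal footprint query. A hypothetical sketch (the record layout is invented for illustration):

      from datetime import date

      # Each record: (image_id, lon_min, lon_max, lat_min, lat_max, acq_date)
      def images_covering(records, lon, lat, start, end):
          """Return IDs of images whose footprint contains (lon, lat) and
          whose acquisition date falls within [start, end]."""
          return [r[0] for r in records
                  if r[1] <= lon <= r[2] and r[3] <= lat <= r[4]
                  and start <= r[5] <= end]

      # e.g. images_covering(db, 137.4, -4.6, date(2004, 1, 1), date(2012, 1, 1))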

  2. The Image of the Community College: Faculty Perceptions at Mercer County Community College.

    ERIC Educational Resources Information Center

    Dietrich, Marilyn L.

    As part of an effort to improve the image of Mercer County Community College, in New Jersey, a faculty member conducted interviews of 15 colleagues and 4 students to determine their perceptions of the college. Participants were asked about their present attitudes towards the college, their views when they first began, what the college does best,…

  3. A Broad Mission, Clear Public Image, and Private Funding: Can Community Colleges Have It All? In Brief

    ERIC Educational Resources Information Center

    Sunderman, Judith A.

    2007-01-01

    Community colleges have an opportunity to engage in institutional advancement while conveying a timely and inspiring message, but the collective voice of the institution needs to be focused and clear. As a result, community colleges need to carefully evaluate their mission, public image, financial needs, and donor base in order to identify a…

  4. Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment

    PubMed Central

    Mitchel, J.A.; Martin, I.S.

    2013-01-01

    A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629
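
    As an example of the alignment metrics listed, the mean neurite angle must be computed with circular statistics, doubling the angles because orientations are axial (defined modulo 180 degrees). A minimal sketch, not the Neurient code itself:

      import numpy as np

      def mean_orientation(angles_deg):
          """Mean orientation of traced neurite segments, in degrees within [0, 180).

          Orientations are axial (theta and theta + 180 are the same direction),
          so angles are doubled before averaging on the unit circle.
          """
          doubled = np.deg2rad(2.0 * np.asarray(angles_deg))
          mean = np.arctan2(np.sin(doubled).mean(), np.cos(doubled).mean())
          return (np.rad2deg(mean) / 2.0) % 180.0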

  5. Evaluation of a hyperspectral image database for demosaicking purposes

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed-Chaker; Süsstrunk, Sabine

    2011-01-01

    We present a study on the applicability of hyperspectral images to evaluate color filter array (CFA) design and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipeline and to compare two different scenarios: evaluating the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluating the performance of demosaicking algorithms applied to the final sRGB color-rendered image. The second scenario is the one most frequently used in the literature, because CFA designs and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipeline with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using the CMSE, CPSNR, s-CIELAB and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, indeed depends significantly on whether the mosaicking/demosaicking is applied to camera raw values as opposed to already-rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.
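
    For orientation, a minimal sketch of two of the building blocks described: sampling an image through an RGGB Bayer CFA, and scoring a demosaicked estimate with CPSNR (helper names invented; images assumed to lie in [0, 1]):

      import numpy as np

      def bayer_mosaic(rgb):
          """Sample an RGB image (h, w, 3) through an RGGB Bayer pattern."""
          mosaic = np.zeros(rgb.shape[:2])
          mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
          mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
          mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
          mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
          return mosaic

      def cpsnr(reference, estimate, peak=1.0):
          """Color PSNR: a single MSE taken over all three channels."""
          mse = np.mean((reference - estimate) ** 2)
          return 10.0 * np.log10(peak ** 2 / mse)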

  6. Addressing the potential adverse effects of school-based BMI assessments on children's wellbeing.

    PubMed

    Gibbs, Lisa; O'Connor, Thea; Waters, Elizabeth; Booth, Michael; Walsh, Orla; Green, Julie; Bartlett, Jenny; Swinburn, Boyd

    2008-01-01

    Introduction: Do child obesity prevention research and intervention measures have the potential to generate adverse concerns about body image by focussing on food, physical activity and body weight? Research findings now demonstrate the emergence of body image concerns in children as young as 5 years. In the context of a large school-community-based child health promotion and obesity prevention study, we aimed to address the potential negative effects of height and weight measures on child wellbeing by developing and implementing an evidence-informed protocol to protect against and prevent body image concerns. fun 'n healthy in Moreland! is a cluster randomised controlled trial of a child health promotion and obesity prevention intervention in 23 primary schools in an inner urban area of Melbourne, Australia. Body image considerations were incorporated into the study philosophies, aims, methods, staff training, language, data collection and reporting procedures of this study. This was informed by the published literature, professional body image expertise, pilot testing and implementation in the conduct of baseline data collection and the intervention. This study is the first record of a body image protection protocol being an integral part of the research processes of a child obesity prevention study. Whilst we are yet to measure its impact and outcomes, we have developed and tested a protocol based on the evidence and with support from stakeholders in order to minimise the adverse impact of study processes on child body image concerns.

  7. An Imaging System capable of monitoring en-glacial and sub-glacial processes of glaciers, streaming ice and ice margins

    NASA Astrophysics Data System (ADS)

    Frearson, N.

    2012-12-01

    Columbia University in New York is developing a geophysical instrumentation package that is capable of monitoring dynamic en-glacial and sub-glacial processes. The instruments include a Riegl scanning laser for precise measurements of the ice surface elevation, stereo photogrammetry from a high-sensitivity (~20 mK) infrared camera and a high-resolution visible imaging camera (2456 x 2058 pixels) to document fine-scale ice temperature changes and surface features, near-surface ice-penetrating radar, and an ice depth measuring radar that can be used to study interior and basal processes of ice shelves, glaciers, ice streams and ice sheets. All instrument data sets will be time-tagged and geo-referenced using precision GPS satellite data. Aircraft orientation will be corrected using inertial measurement technology integrated into the pod. This instrumentation will be flown across some of the planet's largest outlet glaciers in Antarctica and Greenland. However, a key aspect of the design is that at the conclusion of the program, the pod, deployment arm, data acquisition, and power and environmental management system will become available to the science community at large, onto which researchers can install their own instruments. It will also be possible to mount the IcePod onto other airframes. The sensor system will become part of a research facility operated for the science community, and data will be maintained at and made available through a Polar Data Center.

  8. Facilities for High Resolution Imaging of the Sun

    NASA Astrophysics Data System (ADS)

    von der Lühe, Oskar

    2018-04-01

    The Sun is the only star where physical processes can be observed at their intrinsic spatial scales. Even though the Sun is a mere 150 million km from Earth, it is difficult to resolve fundamental processes in the solar atmosphere, because they occur at scales of the order of a kilometer. They can be observed only with telescopes which have apertures of several meters. The current state of the art is solar telescopes with apertures of 1.5 m, which resolve 50 km on the solar surface, soon to be superseded by telescopes with 4 m apertures and 20 km resolution. The US 4 m Daniel K. Inouye Solar Telescope (DKIST) is currently under construction on Maui, Hawaii, and is expected to have first light in 2020. The European solar community collaborates intensively to pursue the 4 m European Solar Telescope, with a construction start in the Canaries early in the next decade. Solar telescopes with slightly smaller apertures are also being planned by the Russian, Indian and Chinese communities. In order to achieve a resolution which approaches the diffraction limit, all modern solar telescopes use adaptive optics, which can compensate virtually any scene on the solar disk. Multi-conjugate adaptive optics designed to compensate fields of the order of one minute of arc have been demonstrated and will become a facility feature of the new telescopes. The requirement for high-precision spectro-polarimetry, about one part in 10^4, makes continuous monitoring of (MC)AO performance and post-processing image reconstruction methods a necessity.
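
    The aperture figures quoted follow from the Rayleigh diffraction limit. A quick numerical check, assuming a wavelength of 500 nm and the mean Sun-Earth distance:

      import math

      wavelength = 500e-9          # m, visible light (assumed)
      sun_distance = 1.496e11      # m, 1 AU

      for aperture in (1.5, 4.0):  # m
          theta = 1.22 * wavelength / aperture        # Rayleigh criterion, rad
          km = theta * sun_distance / 1e3
          print(f"D = {aperture} m -> {km:.0f} km on the solar surface")
      # D = 1.5 m -> ~61 km; D = 4.0 m -> ~23 km, the order of the
      # 50 km / 20 km figures quoted above (exact values depend on wavelength).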

  9. TerraLook: Providing easy, no-cost access to satellite images for busy people and the technologically disinclined

    USGS Publications Warehouse

    Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas

    2008-01-01

    Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.

  10. TerraLook: Providing easy, no-cost access to satellite images for busy people and the technologically disinclined

    USGS Publications Warehouse

    Geller, G.N.; Fosnight, E.A.; Chaudhuri, Sambhudas

    2007-01-01

    Access to satellite images has been largely limited to communities with specialized tools and expertise, even though images could also benefit other communities. This situation has resulted in underutilization of the data. TerraLook, which consists of collections of georeferenced JPEG images and an open source toolkit to use them, makes satellite images available to those lacking experience with remote sensing. Users can find, roam, and zoom images, create and display vector overlays, adjust and annotate images so they can be used as a communication vehicle, compare images taken at different times, and perform other activities useful for natural resource management, sustainable development, education, and other activities. © 2007 IEEE.

  11. National Land Imaging Requirements (NLIR) Pilot Project summary report: summary of moderate resolution imaging user requirements

    USGS Publications Warehouse

    Vadnais, Carolyn; Stensaas, Gregory

    2014-01-01

    Under the National Land Imaging Requirements (NLIR) Project, the U.S. Geological Survey (USGS) is developing a functional capability to obtain, characterize, manage, maintain and prioritize all Earth observing (EO) land remote sensing user requirements. The goal is a better understanding of community needs that can be supported with land remote sensing resources, and a means to match needs with appropriate solutions in an effective and efficient way. The NLIR Project is composed of two components. The first component is focused on the development of the Earth Observation Requirements Evaluation System (EORES) to capture, store and analyze user requirements, whereas the second component is the mechanism and processes to elicit and document the user requirements that will populate the EORES. To develop the second component, the requirements elicitation methodology was exercised and refined through a pilot project conducted from June to September 2013. The pilot project focused specifically on applications and user requirements for moderate resolution imagery (5–120 meter resolution) as the test case for requirements development. The purpose of this summary report is to provide a high-level overview of the requirements elicitation process that was exercised through the pilot project and an early analysis of the moderate resolution imaging user requirements acquired to date to support ongoing USGS sustainable land imaging study needs. The pilot project engaged a limited set of Federal Government users from the operational and research communities, and therefore the information captured represents only a subset of all land imaging user requirements. However, based on a comparison of results, trends, and analysis, the pilot captured a strong baseline of typical application areas and user needs for moderate resolution imagery. Because these results are preliminary and represent only a sample of users and application areas, the information from this report should be used only to indicate general user needs for the applications covered. Users of the information are cautioned that use of specific numeric results may be inappropriate without additional research. Any information used or cited from this report should specifically be cited as preliminary findings.

  12. Caring for the community.

    PubMed

    Spitzer, Roxane

    2004-01-01

    Nurses can play a unique role in caring for their communities. The first and most obvious role is the direct care of patients, the underlying raison d'etre of nursing, and second is the indirect care of the patients' families and friends. The hands-on healing image of nurses is held by many people and personified through the years by such real-life examples as Clara Barton. It is also the image that attracts many to nursing and is fueled by desire--the desire to help, to make a positive difference, and to serve people. It is often a powerful one-on-one connection between caregiver and receiver, nurse and patient, that defines the role of nursing. Yet, nurses can--and should--play broader roles in caring for their communities. This includes the internal community within one's own organization, the environment in which nurses work, and the larger external community--or communities--in which one lives. By reaching out and caring for the broader communities, nurses have the opportunity to grow while the communities benefit from their participatory caring. In addition, the image of nursing is enhanced externally. The nurse as community caregiver melds the heart and soul of nursing for a new 21st century model of caring.

  13. Data publication and sharing using the SciDrive service

    NASA Astrophysics Data System (ADS)

    Mishin, Dmitry; Medvedev, D.; Szalay, A. S.; Plante, R. L.

    2014-01-01

    Despite progress in scientific data storage in recent years, the problem remains of a public storage and sharing system for relatively small scientific datasets. These are the collections forming the "long tail" of the power-law distribution of dataset sizes. The aggregated size of the long-tail data is comparable to the size of all data collections from large archives, and the value of the data is significant. The SciDrive project's main goal is to provide the scientific community with a place to reliably and freely store such data and to make it accessible to the broad scientific community. The primary target audience of the project is the astronomy community, and it will be extended to other fields. We are aiming to create a simple way of publishing a dataset, which can then be shared with other people. The data owner controls the permissions to modify and access the data and can assign a group of users or open the access to everyone. The data contained in the dataset will be automatically recognized by a background process. Known data formats will be extracted according to the user's settings. Currently, tabular data can be automatically extracted to the user's MyDB table, where the user can make SQL queries against the dataset and merge it with other public CasJobs resources. Other data formats can be processed using a set of plugins that upload the data or metadata to user-defined side services. The current implementation targets some of the data formats commonly used by the astronomy communities, including FITS, ASCII and Excel tables, TIFF images, and YT simulation data archives. Along with generic metadata, format-specific metadata is also processed. For example, basic information about celestial objects is extracted from FITS files and TIFF images, if present. A 100 TB implementation has just been put into production at Johns Hopkins University. The system features a public data storage REST service supporting the VOSpace 2.0 and Dropbox protocols, an HTML5 web portal, a command-line client and a Java standalone client to synchronize a local folder with the remote storage. We use the VAO SSO (Single Sign On) service from NCSA for user authentication, which provides free registration for everyone.
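
    As an illustration of the format-specific metadata extraction described, basic celestial-object information can be read from a FITS header with astropy. A minimal sketch (which keywords are present varies from file to file):

      from astropy.io import fits

      def extract_fits_metadata(path):
          """Return generic plus sky-position metadata from a FITS file, if present."""
          with fits.open(path) as hdul:
              header = hdul[0].header
              return {
                  "object": header.get("OBJECT"),      # target name, if recorded
                  "ra": header.get("RA"),              # right ascension
                  "dec": header.get("DEC"),            # declination
                  "date_obs": header.get("DATE-OBS"),  # observation timestamp
                  "telescope": header.get("TELESCOP"),
              }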

  14. Super-Resolution Imaging Strategies for Cell Biologists Using a Spinning Disk Microscope

    PubMed Central

    Hosny, Neveen A.; Song, Mingying; Connelly, John T.; Ameer-Beg, Simon; Knight, Martin M.; Wheeler, Ann P.

    2013-01-01

    In this study we use a spinning disk confocal (SD) microscope to generate super-resolution images of multiple cellular features from any plane in the cell. We obtain super-resolution images by using stochastic intensity fluctuations of biological probes, combining Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) methodologies. We compared different image analysis algorithms for processing super-resolution data to identify the most suitable for the analysis of particular cell structures. SOFI was chosen for X and Y and was able to achieve a resolution of ca. 80 nm; higher resolution, down to approximately 30 nm, was possible depending on the super-resolution image analysis algorithm used. Our method uses low laser power and fluorescent probes which are available either commercially or through the scientific community, and it is therefore gentle enough for biological imaging. Through comparative studies with structured illumination microscopy (SIM) and widefield epifluorescence imaging we determined that our methodology is advantageous for imaging cellular structures which are not immediately at the cell-substrate interface, including the nuclear architecture and mitochondria. We have shown that it is possible to obtain two-colour images, which highlights the potential this technique has for high-content screening, imaging of multiple epitopes and live cell imaging. PMID:24130668
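
    In its simplest zero-time-lag, second-order form, SOFI reduces to the pixel-wise variance of the intensity fluctuations across the image stack. A minimal sketch, omitting cross-correlations and higher orders:

      import numpy as np

      def sofi2(stack):
          """Second-order SOFI image from a (frames, ny, nx) stack.

          Independently blinking emitters add their squared PSFs, so the
          variance image has a PSF narrowed by ~sqrt(2) versus the mean image.
          """
          fluctuations = stack - stack.mean(axis=0, keepdims=True)
          return (fluctuations ** 2).mean(axis=0)   # second cumulant at zero lag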

  15. An innovative and shared methodology for event reconstruction using images in forensic science.

    PubMed

    Milliet, Quentin; Jendly, Manon; Delémont, Olivier

    2015-09-01

    This study presents an innovative methodology for forensic science image analysis for event reconstruction. The methodology is based on experiences from real cases. It provides real added value to technical guidelines such as standard operating procedures (SOPs) and enriches the community of practices at stake in this field. This bottom-up solution outlines the many facets of analysis and the complexity of the decision-making process. Additionally, the methodology provides a backbone for articulating more detailed and technical procedures and SOPs. It emerged from a grounded theory approach; data from individual and collective interviews with eight Swiss and nine European forensic image analysis experts were collected and interpreted in a continuous, circular and reflexive manner. Throughout the process of conducting interviews and panel discussions, similarities and discrepancies were discussed in detail to provide a comprehensive picture of practices and points of view and to ultimately formalise shared know-how. Our contribution sheds light on the complexity of the choices, actions and interactions along the path of data collection and analysis, enhancing both the researchers' and participants' reflexivity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Ground-based thermography of fluvial systems at low and high discharge reveals potential complex thermal heterogeneity driven by flow variation and bioroughness

    USGS Publications Warehouse

    Cardenas, M.B.; Harvey, J.W.; Packman, A.I.; Scott, D.T.

    2008-01-01

    Temperature is a primary physical and biogeochemical variable in aquatic systems. Field-based measurement of temperature at discrete sampling points has revealed temperature variability in fluvial systems, but traditional techniques do not readily allow for synoptic sampling schemes that can address temperature-related questions with broad, yet detailed, coverage. We present results of thermal infrared imaging at different stream discharge conditions (base flow and peak flood) using a handheld IR camera. Remotely sensed temperatures compare well with those measured with a digital thermometer. The thermal images show that periphyton, wood, and sandbars induce significant thermal heterogeneity during low stages. Moreover, the images indicate temperature variability within the periphyton community and within the partially submerged bars. The thermal heterogeneity was diminished during flood inundation, although areas of more slowly moving water to the side of the stream still differed in temperature. The results have consequences for thermally sensitive hydrological processes and implications for models of those processes, especially those that assume an effective stream temperature. Copyright © 2008 John Wiley & Sons, Ltd.

  17. Spitzer Telemetry Processing System

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.

    2013-01-01

    The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.

  18. Big, Deep, and Smart Data in Scanning Probe Microscopy

    DOE PAGES

    Kalinin, Sergei V.; Strelcov, Evgheni; Belianinov, Alex; ...

    2016-09-27

    Scanning probe microscopy techniques open the door to nanoscience and nanotechnology by enabling imaging and manipulation of structure and functionality of matter on nanometer and atomic scales. We analyze the discovery process by SPM in terms of information flow from tip-surface junction to the knowledge adoption by scientific community. Furthermore, we discuss the challenges and opportunities offered by merging of SPM and advanced data mining, visual analytics, and knowledge discovery technologies.

  19. Hyperspectral Imaging Sensors and the Marine Coastal Zone

    NASA Technical Reports Server (NTRS)

    Richardson, Laurie L.

    2000-01-01

    Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as a benthic community (seagrasses), were mapped.
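
    SAM itself is compact enough to sketch: each pixel spectrum is compared to every library spectrum by the angle between them, which makes the classification insensitive to overall illumination scale (array names invented):

      import numpy as np

      def spectral_angle_map(cube, library):
          """Classify each pixel by the smallest spectral angle to a library spectrum.

          cube    : (bands, rows, cols) hyperspectral image
          library : (n_classes, bands) end-member spectra
          """
          bands, rows, cols = cube.shape
          pixels = cube.reshape(bands, -1).T                    # (pixels, bands)
          # cos(angle) = normalized dot product between spectra.
          p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
          lib = library / np.linalg.norm(library, axis=1, keepdims=True)
          angles = np.arccos(np.clip(p @ lib.T, -1.0, 1.0))     # (pixels, n_classes)
          return angles.argmin(axis=1).reshape(rows, cols)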

  20. Naval sensor data database (NSDD)

    NASA Astrophysics Data System (ADS)

    Robertson, Candace J.; Tubridy, Lisa H.

    1999-08-01

    The Naval Sensor Data Database (NSDD) is a multi-year effort to archive, catalogue, and disseminate data from all types of sensors to the mine warfare, signal and image processing, and sensor development communities. The purpose is to improve and accelerate research and technology. Providing performers with the data required to develop and validate improvements in hardware, simulation, and processing will foster advances in sensor and system performance. The NSDD will provide a centralized source of sensor data and its associated ground truth, which will support improved understanding in the areas of signal processing, computer-aided detection and classification, data compression, data fusion, and geo-referencing, as well as sensor and sensor system design.

  1. The Medical Imaging Interaction Toolkit: challenges and advances : 10 years of open-source development.

    PubMed

    Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo

    2013-07-01

    The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.

  2. PACS: implementation in the U.S. Department of Defense

    NASA Astrophysics Data System (ADS)

    Chacko, Anna K.; Wider, Ronald; Romlein, John R.; Cawthon, Michael A.; Richardson, Ronald R., Jr.; Lollar, H. William; Cook, Jay F.; Timboe, Harold L.; Johnson, Thomas G.; Fellows, Douglas W.

    2000-05-01

    The Department of Defense has been a leader in radiology re-engineering for the past decade. Efforts have included the development of two landmark PACS specifications (MDIS and DIN-PACS) and their respective vendor selection and implementation programs. A Tri-Service (Army, Navy and Air Force) radiology re-engineering program was initiated which identified transitioning to digital imaging, PACS and teleradiology as key enabling technologies in a changing business scenario. Subsequently, the systematic adjustment of the procurement process for radiological imaging equipment included a focus on specifying PACS-capable digital imaging modalities and mini-PACS as stepping stones to make the hospitals and health clinics PACS-ready. The success of the PACS and teleradiology program in the DOD is evidenced by the near-filmless operation of most Army and Air Force medical centers, several community hospitals and several operational teleradiology constellations. Additionally, the MDIS PACSystem has become the commercial PACS product for General Electric Medical Systems. The DOD continues to forge ahead in the PACS arena by implementing advanced configurations and operational concepts such as the VRE (Virtual Radiology Environment) and by negotiating regional archiving and regional PACS maintenance programs. Newer regulations (HIPAA, the FDA approval of digital mammography) have been promulgated, impacting the culture and conduct of our business. Incorporating their requirements at the very outset will enable us to streamline the delivery of radiology. The DOD community has embraced the information age at multiple levels. With these initiatives, the healthcare portion of this community is integrating itself into DOD's future. The future holds great possibilities, promises and challenges for the DOD PACS programs.

  3. Building Petascale Cyberinfrastructure and Science Support for Solar Physics: Approach of the DKIST Data Center

    NASA Astrophysics Data System (ADS)

    Berukoff, Steven; Reardon, Kevin; Hays, Tony; Spiess, DJ; Watson, Fraser

    2015-08-01

    When construction is complete in 2019, the Daniel K. Inouye Solar Telescope will be the most-capable large-aperture, high-resolution, multi-instrument solar physics facility in the world. The telescope is designed as a four-meter off-axis Gregorian, with a rotating Coude laboratory designed to simultaneously house and support five first-light imaging and spectropolarimetric instruments. At current design, the facility and its instruments will generate data volumes of 5 PB, produce 10^8 images, and 10^7-10^9 metadata elements annually. These data will not only forge new understanding of solar phenomena at high resolution, but enhance participation in solar physics and further grow a small but vibrant international community. The DKIST Data Center is being designed to store, curate, and process this flood of information, while augmenting its value by providing association of science data and metadata to its acquisition and processing provenance. In early Operations, the Data Center will produce, by autonomous, semi-automatic, and manual means, quality-controlled and -assured calibrated data sets, closely linked to facility and instrument performance during the Operations lifecycle. These data sets will be made available to the community openly and freely, and software and algorithms made available through community repositories like GitHub for further collaboration and improvement. We discuss the current design and approach of the DKIST Data Center, describing the development cycle, early technology analysis and prototyping, and the roadmap ahead. In this budget-conscious era, a key design criterion is elasticity, the ability of the built system to adapt to changing work volumes, types, and the shifting scientific landscape, without undue cost or operational impact. We discuss our deep iterative development approach, the underappreciated challenges of calibrating ground-based solar data, the crucial integration of the Data Center within the larger Operations lifecycle, and how software and hardware support, intelligently deployed, will enable high-caliber solar physics research and community growth for the DKIST's 40-year lifespan.

  4. Technologies for Nondestructive Evaluation of Surfaces and Thin Coatings

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This project included several related activities encompassing basic understanding, technological development, customer identification and commercial transfer of several methodologies for nondestructive evaluation of surfaces and thin surface coatings. Consistent with the academic environment, students were involved in the effort, working with established investigators to further their training, provide a nucleus of experienced practitioners in the new technologies during their industrial introduction, and apply their talents to project goals. As will be seen in various portions of the report, some of the effort has led to commercialization. This process has spawned other efforts related to this project which are supported from outside sources. These activities are occupying the efforts of some of the people who were previously supported within this grant and its predecessors. The most advanced of the supported technologies is thermography, for which the previous joint efforts of the investigators and NASA researchers have developed several techniques for extending the utility of straight thermographic inspection by producing methods of interpretation and analysis accessible to automatic image processing with computer data analysis. The effort reported for this technology has been to introduce the techniques to new user communities, who are then able to add to the effective uses of existing products with only slight development work. In a related development, analysis of a thermal measurement situation in past efforts led to a new insight into the behavior of simple temperature probes. This insight, previously reported to the narrow community in which the particular measurement was made, was reported to the community of generic temperature measurement experts this year. In addition to the propagation of mature thermographic techniques, the development of a thermoelastic imaging system has been an important related effort. Part of the work carried out in the effort reported here has been to prepare reports introducing the newly commercially available thermoelastic measurements to the appropriate user communities.

  5. Photovoice ethics: perspectives from Flint Photovoice.

    PubMed

    Wang, C C; Redwood-Jones, Y A

    2001-10-01

    Photovoice is a participatory health promotion strategy in which people use cameras to document their health and work realities. As participants engage in a group process of critical reflection, they may advocate for change in their communities by using the power of their images and stories to communicate with policy makers. In public health initiatives from China to California, community people have used photovoice to carry out participatory needs assessment, conduct participatory evaluation, and reach policy makers to improve community health. This article begins to address ethical issues raised by the use of photovoice: the potential for invasion of privacy and how that may be prevented; issues in recruitment, representation, participation, and advocacy; and specific methodological techniques that should be used to minimize participants' risks and to maximize benefits. The authors describe lessons learned from the large-scale Flint Photovoice involving youth, adults, and policy makers.

  6. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319

  7. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  8. New techniques for fluorescence background rejection in microscopy and endoscopy

    NASA Astrophysics Data System (ADS)

    Ventalon, Cathie

    2009-03-01

    Confocal microscopy is a popular technique in the bioimaging community, mainly because it provides optical sectioning. However, its standard implementation requires 3-dimensional scanning of focused illumination throughout the sample. Efficient non-scanning alternatives have been implemented, among which is the simple and well-established incoherent structured illumination microscopy (SIM) [1]. We recently proposed a similar technique, called Dynamic Speckle Illumination (DSI) microscopy, wherein the incoherent grid illumination pattern is replaced with a coherent speckle illumination pattern from a laser, taking advantage of the fact that speckle contrast is well maintained in a scattering medium, making the technique well adapted to tissue imaging [2]. DSI microscopy relies on the illumination of a sample with a sequence of dynamic speckle patterns and an image processing algorithm based only on a priori knowledge of speckle statistics. The choice of this post-processing algorithm is crucial to obtaining good sectioning strength: in particular, we developed a novel post-processing algorithm based on wavelet pre-filtering of the raw images and obtained near-confocal fluorescence sectioning in a mouse brain labeled with GFP, with good image quality maintained throughout a depth of ~100 μm [3]. With the aim of imaging fluorescent tissue at greater depth, we recently applied structured illumination to endoscopy. We used a similar set-up wherein the illumination pattern (a one-dimensional grid) is transported to the sample with an imaging fiber bundle with a miniaturized objective and the fluorescence image is collected through the same bundle. Using a post-processing algorithm similar to the one previously described [3], we obtained high-quality images of a fluorescein-labeled rat colonic mucosa [4], establishing the potential of our endomicroscope for bioimaging applications. References: [1] M. A. A. Neil et al., Opt. Lett. 22, 1905 (1997); [2] C. Ventalon et al., Opt. Lett. 30, 3350 (2005); [3] C. Ventalon et al., Opt. Lett. 32, 1417 (2007); [4] N. Bozinovic et al., Opt. Express 16, 8016 (2008).
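
    The simplest DSI estimator of the sectioned image is the pixel-wise standard deviation across the speckle sequence, since in-focus fluorescence tracks the speckle fluctuations while defocused background stays nearly constant. A minimal sketch, omitting the wavelet pre-filtering of [3]:

      import numpy as np

      def dsi_section(stack):
          """Optically sectioned image from a (frames, ny, nx) speckle sequence.

          In-focus signal fluctuates with the speckle pattern; defocused
          background is nearly constant and so cancels in the deviation.
          """
          return stack.std(axis=0)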

  9. Demonstrating the Value of Near Real-time Satellite-based Earth Observations in a Research and Education Framework

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.

    2017-12-01

    The launch of the GOES-16 series provides an opportunity to advance near real-time applications in natural hazard detection, monitoring and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes four components: 1) near real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real-time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fires, flooding, fog and lightning. A web-based application system is developed to disseminate near real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances the capacities for undergraduate and graduate education in Earth system and climate sciences and related applications, teaching the basic principles and technology of real-time applications with remote sensing measurements. It also provides an integrated platform for near real-time monitoring of extreme weather events, which is helpful for various user communities.
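
    One common way to produce the GIS-compatible GeoTIFF output mentioned is the rasterio package. A hypothetical sketch for a single-band product on a regular lat/lon grid:

      import rasterio
      from rasterio.transform import from_bounds

      def write_geotiff(path, data, bounds, crs="EPSG:4326"):
          """Write a 2-D array as a GIS-ready GeoTIFF.

          bounds = (west, south, east, north) in the given CRS.
          """
          height, width = data.shape
          transform = from_bounds(*bounds, width, height)
          with rasterio.open(path, "w", driver="GTiff", height=height,
                             width=width, count=1, dtype=data.dtype.name,
                             crs=crs, transform=transform) as dst:
              dst.write(data, 1)   # band 1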

  10. RabbitQR: fast and flexible big data processing at LSST data rates using existing, shared-use hardware

    NASA Astrophysics Data System (ADS)

    Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi

    2016-08-01

    Processing astronomical data to science readiness was and remains a challenge, in particular for multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general-purpose and shared with a large number of users in a typical university setup. Our approach to this challenge is a flexible framework combining the best of both high-performance (large number of nodes, internal communication) and high-throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool for the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today, using existing commodity shared-use hardware, we can process data with throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
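
    A minimal sketch of the worker side of such an AMQP-based framework, using the pika client (the queue name and processing step are invented; the server and manager roles are omitted):

      import pika  # AMQP client; assumes a RabbitMQ-style broker is reachable

      def process(body):
          """Stand-in for the actual reduction/calibration step."""
          print("processing", body)

      def run_worker(queue="reduce-exposure", host="localhost"):
          """Minimal AMQP worker: fetch one task at a time, ack when done."""
          connection = pika.BlockingConnection(pika.ConnectionParameters(host))
          channel = connection.channel()
          channel.queue_declare(queue=queue, durable=True)
          channel.basic_qos(prefetch_count=1)   # no task hoarding: aids load balancing

          def on_task(ch, method, properties, body):
              process(body)
              ch.basic_ack(delivery_tag=method.delivery_tag)  # requeued if worker dies

          channel.basic_consume(queue=queue, on_message_callback=on_task)
          channel.start_consuming()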

  11. WE-E-204-03: Radiology and Other Imaging Journals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karellas, A.

    Research papers authored by Medical Physicists address a large spectrum of oncologic, imaging, or basic research problems; exploit a wide range of physical and engineering methodologies; and often describe the efforts of a multidisciplinary research team. Given dozens of competing journals accepting medical physics articles, it may not be clear to an individual author which journal is the best venue for disseminating their work to the scientific community. Relevant factors usually include the journal's audience and scientific impact, but also such factors as perceived acceptance rate, interest in their topic, and quality of service. The purpose of this symposium is to provide the medical physics community with an overview of scope, review processes, and article guidelines for the following journals: Radiology, Medical Physics, International Journal of Radiation Biology and Physics, Journal of Applied Clinical Medical Physics, and Practical Radiation Oncology. Senior members of the editorial board for each journal will provide details as to the journal's review process, for example: single-blind versus double-blind reviews; open access policies; the hierarchy of the review process in terms of editorial board structure; the reality of acceptance, in terms of acceptance rate; and the types of research the journal prefers to publish. Other journals will be discussed as well. The goal is to provide authors with guidance before they begin to write their papers, not only for proper formatting, but also so that the readership is appropriate for the particular paper, hopefully increasing the quality and impact of the paper and the likelihood of publication. Learning Objectives: To review each journal's submission and review process; to offer guidance as to how to increase quality, impact and chances of acceptance; to help decipher which journal is appropriate for a given work. Disclosure: A. Karellas, research collaboration with Koning Corporation.

  12. Development and Operation of the Americas ALOS Data Node

    NASA Astrophysics Data System (ADS)

    Arko, S. A.; Marlin, R. H.; La Belle-Hamer, A. L.

    2004-12-01

    In the spring of 2005, the Japanese Aerospace Exploration Agency (JAXA) will launch the next generation in advanced, remote sensing satellites. The Advanced Land Observing Satellite (ALOS) includes three sensors, two visible imagers and one L-band polarimetric SAR, providing high-quality remote sensing data to the scientific and commercial communities throughout the world. Focusing on remote sensing and scientific pursuits, ALOS will image nearly the entire Earth using all three instruments during its expected three-year lifetime. These data sets offer the potential for data continuation of older satellite missions as well as new products for the growing user community. One of the unique features of the ALOS mission is the data distribution approach. JAXA has created a worldwide cooperative data distribution network. The data nodes are NOAA/ASF representing the Americas ALOS Data Node (AADN), ESA representing the ALOS European and African Node (ADEN), Geoscience Australia representing Oceania and JAXA representing the Asian continent. The AADN is the sole agency responsible for archival, processing and distribution of L0 and L1 products to users in both North and South America. In support of this mission, AADN is currently developing a processing and distribution infrastructure to provide easy access to these data sets. Utilizing a custom, grid-based process controller and media generation system, the overall infrastructure has been designed to provide maximum throughput while requiring a minimum of operator input and maintenance. This paper will present an overview of the ALOS system, details of each sensor's capabilities and of the processing and distribution system being developed by AADN to provide these valuable data sets to users throughout North and South America.

  13. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel-selectable region-of-interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.
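    As a quick check on the numbers quoted above, the following sketch (hypothetical Python, ours rather than anything shipped with the board) reproduces the frame buffer arithmetic: a 2048 x 2048 x 32-bit buffer occupies exactly 16 Mbytes, and panning the 1280 x 1024 screen within it is a simple offset computation.

    ```python
    # Illustrative arithmetic for the frame buffer described above; names are ours.
    FRAME_W, FRAME_H, BPP = 2048, 2048, 32      # buffer geometry, bits per pixel
    SCREEN_W, SCREEN_H = 1280, 1024             # visible display window

    def buffer_bytes(w, h, bits_per_pixel):
        """Total storage required for one frame buffer."""
        return w * h * bits_per_pixel // 8

    def window_offset(pan_x, pan_y):
        """Byte offset of the display window origin inside the buffer,
        clamped so the window never runs past the 2K x 2K frame."""
        pan_x = max(0, min(pan_x, FRAME_W - SCREEN_W))
        pan_y = max(0, min(pan_y, FRAME_H - SCREEN_H))
        return (pan_y * FRAME_W + pan_x) * (BPP // 8)

    print(buffer_bytes(FRAME_W, FRAME_H, BPP))  # 16777216 bytes = 16 Mbytes
    print(window_offset(512, 256))              # pan the window into the image
    ```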

  14. Solar System Studies with the Space Infrared Telescope Facility (SIRTF)

    NASA Technical Reports Server (NTRS)

    Cruikshank, Dale P.; DeVincenzi, Donald L. (Technical Monitor)

    1998-01-01

    SIRTF (Space Infrared Telescope Facility) is the final element in NASA's 'Great Observatories' program. It consists of an 85-cm cryogenically-cooled observatory for infrared astronomy from space. SIRTF is scheduled for launch in late 2001 or early 2002 on a Delta rocket into a heliocentric orbit trailing the Earth. Data from SIRTF will be processed and disseminated to the community through the SIRTF Science Center (SSC) located at the Infrared Processing and Analysis Center (IPAC) at Caltech. Some 80% of the total observing time (estimated at a minimum of 7500 hours of integration time per year for the mission lifetime of about 4 years) will be available to the scientific community at large through a system of refereed proposals. Three basic instruments are located in the SIRTF focal plane. The Multiband Imaging Photometer (MIPS), the Infrared Array Camera (IRAC), and the Infrared Spectrometer (IRS), taken together, provide imaging and spectroscopy from 3.5 to 160 microns. Among the solar system studies suited to SIRTF are the following: 1) spectroscopy and radiometry of small bodies from the asteroid main belt, through the Trojan clouds, to the Kuiper Disk; 2) dust distribution in the zodiacal cloud and the Earth's heliocentric dust ring; 3) spectroscopy and radiometry of comets; and 4) spectroscopy and radiometry of planets and their satellites. Searches for, and studies of, dust disks around other stars, brown dwarfs, and superplanets will also be conducted with SIRTF. The SIRTF web site (http://ssc.ipac.caltech.edu/sirtf) contains important details and documentation on the project, the spacecraft, the telescope, instruments, and observing procedures. A community-wide workshop for solar system studies with SIRTF is being planned by the author and Martha S. Hanner for the summer of 1999.

  15. Computational tissue volume reconstruction of a peripheral nerve using high-resolution light-microscopy and reconstruct.

    PubMed

    Gierthmuehlen, Mortimer; Freiman, Thomas M; Haastert-Talini, Kirsten; Mueller, Alexandra; Kaminsky, Jan; Stieglitz, Thomas; Plachta, Dennis T T

    2013-01-01

    The development of neural cuff-electrodes requires several in vivo studies and revisions of the electrode design before the electrode is completely adapted to its target nerve. It is therefore favorable to simulate many of the steps involved in this process to reduce costs and animal testing. As the restoration of motor function is one of the most interesting applications of cuff-electrodes, the position and trajectories of myelinated fibers in the simulated nerve are important. In this paper, we investigate a method for building a precise neuroanatomical model of myelinated fibers in a peripheral nerve based on images obtained using high-resolution light microscopy. This anatomical model addresses the first aim of our "Virtual workbench" project: to establish a method for creating realistic neural simulation models based on image datasets. The imaging, processing, segmentation and technical limitations are described, and the steps involved in the transition into a simulation model are presented. The results showed that the position and trajectories of the myelinated axons were traced and virtualized using our technique, and that small nerves could be reliably modeled from light microscopy images using low-cost open-source software and standard hardware. The anatomical model will be released to the scientific community.

  16. Computational Tissue Volume Reconstruction of a Peripheral Nerve Using High-Resolution Light-Microscopy and Reconstruct

    PubMed Central

    Gierthmuehlen, Mortimer; Freiman, Thomas M.; Haastert-Talini, Kirsten; Mueller, Alexandra; Kaminsky, Jan; Stieglitz, Thomas; Plachta, Dennis T. T.

    2013-01-01

    The development of neural cuff-electrodes requires several in vivo studies and revisions of the electrode design before the electrode is completely adapted to its target nerve. It is therefore favorable to simulate many of the steps involved in this process to reduce costs and animal testing. As the restoration of motor function is one of the most interesting applications of cuff-electrodes, the position and trajectories of myelinated fibers in the simulated nerve are important. In this paper, we investigate a method for building a precise neuroanatomical model of myelinated fibers in a peripheral nerve based on images obtained using high-resolution light microscopy. This anatomical model addresses the first aim of our “Virtual workbench” project: to establish a method for creating realistic neural simulation models based on image datasets. The imaging, processing, segmentation and technical limitations are described, and the steps involved in the transition into a simulation model are presented. The results showed that the position and trajectories of the myelinated axons were traced and virtualized using our technique, and that small nerves could be reliably modeled from light microscopy images using low-cost open-source software and standard hardware. The anatomical model will be released to the scientific community. PMID:23785485

  17. Technical Note: DIRART – A software suite for deformable image registration and adaptive radiotherapy research

    PubMed Central

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.

    2011-01-01

    Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176
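    DIRART itself is written in MATLAB, so the following is only a hedged Python illustration of the central object its DIR algorithms produce and consume: a displacement vector field (DVF) used to warp one image toward another. Function and variable names are ours, not DIRART's.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_image(image, dvf):
        """Warp a 2-D image with a displacement vector field.

        image : (H, W) array
        dvf   : (2, H, W) array of per-pixel displacements (rows, cols),
                e.g. the output of a deformable image registration.
        """
        rows, cols = np.indices(image.shape).astype(float)
        sample_r = rows + dvf[0]   # where each output pixel samples from
        sample_c = cols + dvf[1]
        return map_coordinates(image, [sample_r, sample_c],
                               order=1, mode="nearest")

    # Toy usage: pull samples from 3 pixels to the left, i.e. shift content right.
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0
    dvf = np.zeros((2, 64, 64))
    dvf[1] = -3.0
    warped = warp_image(img, dvf)
    ```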

  18. Non-invasive long-term fluorescence live imaging of Tribolium castaneum embryos.

    PubMed

    Strobl, Frederic; Stelzer, Ernst H K

    2014-06-01

    Insect development has contributed significantly to our understanding of metazoan development. However, most information has been obtained by analyzing a single species, the fruit fly Drosophila melanogaster. Embryonic development of the red flour beetle Tribolium castaneum differs fundamentally from that of Drosophila in aspects such as short-germ development, embryonic leg development, extensive extra-embryonic membrane formation and non-involuted head development. Although Tribolium has become the second most important insect model organism, previous live imaging attempts have addressed only specific questions and no long-term live imaging data of Tribolium embryogenesis have been available. By combining light sheet-based fluorescence microscopy with a novel mounting method, we achieved complete, continuous and non-invasive fluorescence live imaging of Tribolium embryogenesis at high spatiotemporal resolution. The embryos survived the 2-day or longer imaging process, developed into adults and produced fertile progeny. Our data document all morphogenetic processes from the rearrangement of the uniform blastoderm to the onset of regular muscular movement in the same embryo and in four orientations, contributing significantly to the understanding of Tribolium development. Furthermore, we created a comprehensive chronological table of Tribolium embryogenesis, integrating most previous work and providing a reference for future studies. Based on our observations, we provide evidence that serosa window closure and serosa opening, although deferred by more than 1 day, are linked. All our long-term imaging datasets are available as a resource for the community. Tribolium is only the second insect species, after Drosophila, for which non-invasive long-term fluorescence live imaging has been achieved. © 2014. Published by The Company of Biologists Ltd.

  19. Data to Pictures to Data: Outreach Imaging Software and Metadata

    NASA Astrophysics Data System (ADS)

    Levay, Z.

    2011-07-01

    A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories and ground-based professional facilities, and within the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and the application of photo/image-processing techniques. Some additional effort is needed to close the loop and make this imagery conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography now comply with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be used in more sophisticated, imaginative ways, exemplified by Sky in Google Earth and World Wide Telescope.

  20. MITK global tractography

    NASA Astrophysics Data System (ADS)

    Neher, Peter F.; Stieltjes, Bram; Reisert, Marco; Reicht, Ignaz; Meinzer, Hans-Peter; Fritzsche, Klaus H.

    2012-02-01

    Fiber tracking algorithms yield valuable information for neurosurgery as well as for automated diagnostic approaches. However, they have not yet arrived in daily clinical practice. In this paper we present an open-source integration of the global tractography algorithm proposed by Reisert et al. [1] into the open-source Medical Imaging Interaction Toolkit (MITK) developed and maintained by the Division of Medical and Biological Informatics at the German Cancer Research Center (DKFZ). The integration of this algorithm into a standardized and open development environment like MITK increases the accessibility of tractography algorithms for the science community and is an important step towards bringing neuronal tractography closer to clinical application. The MITK diffusion imaging application, downloadable from www.mitk.org, combines all the steps necessary for a successful tractography: preprocessing, reconstruction of the images, the actual tracking, live monitoring of intermediate results, postprocessing, and visualization of the final tracking results. This paper presents typical tracking results and demonstrates the steps for pre- and post-processing of the images.

  1. MEG-BIDS, the brain imaging data structure extended to magnetoencephalography

    PubMed Central

    Niso, Guiomar; Gorgolewski, Krzysztof J.; Bock, Elizabeth; Brooks, Teon L.; Flandin, Guillaume; Gramfort, Alexandre; Henson, Richard N.; Jas, Mainak; Litvak, Vladimir; T. Moreau, Jeremy; Oostenveld, Robert; Schoffelen, Jan-Mathijs; Tadel, Francois; Wexler, Joseph; Baillet, Sylvain

    2018-01-01

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software packages that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone. PMID:29917016
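    To make the flavor of the standard concrete, the hedged sketch below assembles a BIDS-style path for a single MEG recording in Python. The entity pattern follows the published BIDS conventions, but this is a simplified illustration; the full specification defines many more entities, sidecar JSON files, and modality-specific rules.

    ```python
    from pathlib import Path

    def meg_bids_path(root, sub, ses, task, run, ext="fif"):
        """Build a BIDS-style path for one MEG recording (simplified sketch)."""
        name = f"sub-{sub}_ses-{ses}_task-{task}_run-{run}_meg.{ext}"
        return Path(root) / f"sub-{sub}" / f"ses-{ses}" / "meg" / name

    print(meg_bids_path("/data/bids", "01", "01", "rest", "01"))
    # /data/bids/sub-01/ses-01/meg/sub-01_ses-01_task-rest_run-01_meg.fif
    ```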

  2. MEG-BIDS, the brain imaging data structure extended to magnetoencephalography.

    PubMed

    Niso, Guiomar; Gorgolewski, Krzysztof J; Bock, Elizabeth; Brooks, Teon L; Flandin, Guillaume; Gramfort, Alexandre; Henson, Richard N; Jas, Mainak; Litvak, Vladimir; T Moreau, Jeremy; Oostenveld, Robert; Schoffelen, Jan-Mathijs; Tadel, Francois; Wexler, Joseph; Baillet, Sylvain

    2018-06-19

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software packages that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.

  3. Recent advances in nondestructive evaluation made possible by novel uses of video systems

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.

    1990-01-01

    Complex materials are being developed for use in future advanced aerospace systems. High temperature materials have been targeted as a major area of materials development. The development of composites consisting of ceramic matrix and ceramic fibers or whiskers is currently being aggressively pursued internationally. These new advanced materials are difficult and costly to produce; however, their low density and high operating temperature range are needed for the next generation of advanced aerospace systems. These materials represent a challenge to the nondestructive evaluation community. Video imaging techniques not only enhance the nondestructive evaluation, but they are also required for proper evaluation of these advanced materials. Specific research examples are given, highlighting the impact that video systems have had on the nondestructive evaluation of ceramics. An image processing technique for computerized determination of grain and pore size distribution functions from microstructural images is discussed. The uses of video and computer systems for displaying, evaluating, and interpreting ultrasonic image data are presented.

  4. Investigating the relationship between peat biogeochemistry and above-ground plant phenology with remote sensing along a gradient of permafrost thaw.

    NASA Astrophysics Data System (ADS)

    Garnello, A.; Dye, D. G.; Bogle, R.; Hough, M.; Raab, N.; Dominguez, S.; Rich, V. I.; Crill, P. M.; Saleska, S. R.

    2016-12-01

    Global climate models predict a 50%-85% decrease in permafrost area in northern regions by 2100 due to increased temperature and precipitation variability, potentially releasing large stores of carbon as greenhouse gases (GHG) due to microbial activity. Linking belowground biogeochemical processes with observable above-ground plant dynamics would greatly increase the ability to track and model GHG emissions from permafrost thaw, but current research has yet to satisfactorily develop this link. We hypothesized that seasonal patterns in peatland biogeochemistry manifest as observable plant phenology due to the tight coupling resulting from plant-microbial interactions. We tested this by using an automated, tower-based camera to acquire daily composite (red, green, blue) and near infrared (NIR) images of a thawing permafrost peatland site near Abisko, Sweden. The images encompassed a range of exposures which were merged into high-dynamic-range images, a novel application to remote sensing of plant phenology. The 2016 growing season camera images are accompanied by mid-to-late season CH4 and CO2 fluxes measured from soil collars, and by early-, mid-, and late-season peat core samples characterizing the composition of microbial communities and key metabolic genes, and the organic matter and trace gas composition of peat porewater. Additionally, nearby automated gas flux chambers measured sub-hourly fluxes of CO2 and CH4 from the peat, which will also be incorporated into the analysis of relationships between seasonal camera-derived vegetation indices and gas fluxes from habitats with different vegetation types. While remote sensing is a proven method for observing plant phenology, this technology has yet to be combined with soil biogeochemical and microbial community data in regions of permafrost thaw. Establishing a high resolution phenology monitoring system linked to soil biogeochemical processes in subarctic peatlands will advance the understanding of how observable patterns in plant phenology can be used to monitor permafrost thaw and ecosystem carbon cycling.
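    The camera-derived vegetation indices mentioned above are computed directly from co-registered visible and NIR frames. The study does not spell out its exact index, so the sketch below simply shows the widely used NDVI as one plausible choice; array names are ours.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        """Normalized Difference Vegetation Index from co-registered
        near-infrared and red bands; values fall in [-1, 1]."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + eps)

    # Toy usage with random 8-bit bands standing in for camera frames.
    rng = np.random.default_rng(0)
    nir_band = rng.integers(0, 256, (100, 100))
    red_band = rng.integers(0, 256, (100, 100))
    index = ndvi(nir_band, red_band)   # greener pixels score higher
    ```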

  5. Reprocessing the Historical Satellite Passive Microwave Record at Enhanced Spatial Resolutions using Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Long, D. G.; Paget, A. C.; Armstrong, R. L.

    2015-12-01

    Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Currently available global gridded passive microwave data sets serve a diverse community of hundreds of data users, but do not meet many requirements of modern Earth System Data Records (ESDRs) or Climate Data Records (CDRs), most notably in the areas of intersensor calibration, quality-control, provenance and consistent processing methods. The original gridding techniques were relatively primitive and were produced on 25 km grids using the original EASE-Grid definition that is not easily accommodated in modern software packages. Further, since the first Level 3 data sets were produced, the Level 2 passive microwave data on which they were based have been reprocessed as Fundamental CDRs (FCDRs) with improved calibration and documentation. We are funded by NASA MEaSUREs to reprocess the historical gridded data sets as EASE-Grid 2.0 ESDRs, using the most mature available Level 2 satellite passive microwave (SMMR, SSM/I-SSMIS, AMSR-E) records from 1978 to the present. We have produced prototype data from SSM/I and AMSR-E for the year 2003, for review and feedback from our Early Adopter user community. The prototype data set includes conventional, low-resolution ("drop-in-the-bucket" 25 km) grids and enhanced-resolution grids derived from the two candidate image reconstruction techniques we are evaluating: 1) Backus-Gilbert (BG) interpolation and 2) a radiometer version of Scatterometer Image Reconstruction (SIR). We summarize our temporal subsetting technique, algorithm tuning parameters and computational costs, and include sample SSM/I images at enhanced resolutions of up to 3 km. We are actively working with our Early Adopters to finalize content and format of this new, consistently-processed high-quality satellite passive microwave ESDR.
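    For contrast with the two reconstruction techniques under evaluation, the sketch below illustrates the conventional "drop-in-the-bucket" gridding they improve upon: every swath sample falls into the grid cell containing it, and the cell value is the mean of its samples. The flat latitude/longitude grid and all names are ours for illustration; the actual products use EASE-Grid 2.0 projections.

    ```python
    import numpy as np

    def drop_in_the_bucket(lats, lons, values, lat_edges, lon_edges):
        """Average swath samples into fixed grid cells ("drop-in-the-bucket")."""
        ny, nx = len(lat_edges) - 1, len(lon_edges) - 1
        iy = np.digitize(lats, lat_edges) - 1      # cell row of each sample
        ix = np.digitize(lons, lon_edges) - 1      # cell column of each sample
        ok = (iy >= 0) & (iy < ny) & (ix >= 0) & (ix < nx)
        total = np.zeros((ny, nx))
        count = np.zeros((ny, nx))
        np.add.at(total, (iy[ok], ix[ok]), values[ok])
        np.add.at(count, (iy[ok], ix[ok]), 1)
        with np.errstate(invalid="ignore"):        # empty cells become NaN
            return np.where(count > 0, total / count, np.nan)
    ```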

  6. Big, Deep, and Smart Data in Scanning Probe Microscopy.

    PubMed

    Kalinin, Sergei V; Strelcov, Evgheni; Belianinov, Alex; Somnath, Suhas; Vasudevan, Rama K; Lingerfelt, Eric J; Archibald, Richard K; Chen, Chaomei; Proksch, Roger; Laanait, Nouamane; Jesse, Stephen

    2016-09-27

    Scanning probe microscopy (SPM) techniques have opened the door to nanoscience and nanotechnology by enabling imaging and manipulation of the structure and functionality of matter at nanometer and atomic scales. Here, we analyze the scientific discovery process in SPM by following the information flow from the tip-surface junction, to knowledge adoption by the wider scientific community. We further discuss the challenges and opportunities offered by merging SPM with advanced data mining, visual analytics, and knowledge discovery technologies.

  7. The gene expression database for mouse development (GXD): putting developmental expression information at your fingertips.

    PubMed

    Smith, Constance M; Finger, Jacqueline H; Kadin, James A; Richardson, Joel E; Ringwald, Martin

    2014-10-01

    Because molecular mechanisms of development are extraordinarily complex, the understanding of these processes requires the integration of pertinent research data. Using the Gene Expression Database for Mouse Development (GXD) as an example, we illustrate the progress made toward this goal, and discuss relevant issues that apply to developmental databases and developmental research in general. Since its first release in 1998, GXD has served the scientific community by integrating multiple types of expression data from publications and electronic submissions and by making these data freely and widely available. Focusing on endogenous gene expression in wild-type and mutant mice and covering data from RNA in situ hybridization, in situ reporter (knock-in), immunohistochemistry, reverse transcriptase-polymerase chain reaction, Northern blot, and Western blot experiments, the database has grown tremendously over the years in terms of data content and search utilities. Currently, GXD includes over 1.4 million annotated expression results and over 260,000 images. All these data and images are readily accessible to many types of database searches. Here we describe the data and search tools of GXD; explain how to use the database most effectively; discuss how we acquire, curate, and integrate developmental expression information; and describe how the research community can help in this process. Copyright © 2014 The Authors Developmental Dynamics published by Wiley Periodicals, Inc. on behalf of American Association of Anatomists.

  8. Generation of Digital Surface Models from satellite photogrammetry: the DSM-OPT service of the ESA Geohazards Exploitation Platform (GEP)

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Michéa, David; Malet, Jean-Philippe

    2017-04-01

    The continuously increasing fleet of agile stereo-capable very-high resolution (VHR) optical satellites has facilitated the acquisition of multi-view images of the Earth's surface. Theoretical revisit times have been reduced to less than one day, and the highest commercially available spatial resolution now amounts to 30 cm/pixel. Digital Surface Models (DSM) and point clouds computed from such satellite stereo-acquisitions can provide valuable input for studies in geomorphology, tectonics, glaciology, hydrology and urban remote sensing. The photogrammetric processing, however, still requires significant expertise, computational resources and costly commercial software. To enable a large Earth Science community (researchers and end-users) to easily and rapidly process VHR multi-view images, this work targets the implementation of a fully automatic satellite-photogrammetry pipeline (i.e., DSM-OPT) on the ESA Geohazards Exploitation Platform (GEP). The implemented pipeline is based on the open-source photogrammetry library MicMac [1] and is designed for distributed processing on a cloud-based infrastructure. The service can be employed in pre-defined processing modes (i.e. urban, plain, hilly, and mountainous environments) or in an advanced processing mode (in which expert users have the possibility to adapt the processing parameters to their specific applications). Four representative use cases are presented to illustrate the accuracy of the resulting surface models and ortho-images as well as the overall processing time. These use cases consisted of the construction of surface models from series of Pléiades images for four applications: urban analysis (Strasbourg, France), landslide detection in mountainous environments (South French Alps), co-seismic deformation in mountain environments (Central Italy earthquake sequence of 2016) and fault recognition for paleo-tectonic analysis (North-East India). Comparisons of the satellite-derived topography to airborne LiDAR topography are discussed. [1] Rupnik, E., Pierrot Deseilligny, M., Delorme, A., and Klinger, Y.: Refined satellite image orientation in the free open-source photogrammetric tools APERO/MICMAC, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-1, 83-90, doi:10.5194/isprs-annals-III-1-83-2016, 2016.

  9. Mineralogical Mapping of Asteroid Itokawa using Calibrated Hayabusa AMICA images and NIRS Spectrometer Data

    NASA Astrophysics Data System (ADS)

    Le Corre, Lucille; Becker, Kris J.; Reddy, Vishnu; Li, Jian-Yang; Bhatt, Megha

    2016-10-01

    The goal of our work is to restore data from the Hayabusa spacecraft that is available in the Planetary Data System (PDS) Small Bodies Node. More specifically, our objectives are to radiometrically calibrate and photometrically correct AMICA (Asteroid Multi-Band Imaging Camera) images of Itokawa. The existing images archived in the PDS are not in reflectance and are not corrected for the effects of viewing geometry. AMICA images are processed with the Integrated Software for Imagers and Spectrometers (ISIS) system from USGS, widely used for planetary image analysis. The processing consists of ingesting the images into ISIS (amica2isis), updating the AMICA start times (sumspice), radiometric calibration (amicacal) including smear correction, applying SPICE ephemerides, adjusting control using Gaskell SUMFILEs (sumspice), projecting individual images (cam2map), and creating global or local mosaics. The amicacal application also has an option to remove pixels corresponding to the polarizing filters on the left side of the image frame, and will include a correction for the point spread function (PSF). The latest version of the PSF, published by Ishiguro et al. in 2014, includes a correction for the effect of scattered light. This effect is important to correct because it can introduce errors at the 10% level and mostly affects the longer-wavelength filters such as zs and p. The Hayabusa team decided to use the color data for six of the filters for scientific analysis after correcting for the scattered light. We will present calibrated data in I/F for all seven AMICA color filters. All newly implemented ISIS applications and map projections from this work have been or will be distributed to the community via ISIS public releases. We also processed the NIRS spectrometer data, and we will perform photometric modeling, then apply photometric corrections, and finally extract mineralogical parameters. The end results will be the creation of pyroxene chemistry and olivine/pyroxene ratio maps of Itokawa using NIRS and AMICA map products. All the products from this work will be archived on the PDS website. This work was supported by NASA Planetary Missions Data Analysis Program grant NNX13AP27G.
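    Because the ISIS applications named above are command-line programs, a batch run over many AMICA frames is naturally scripted. The hedged Python sketch below chains the steps for one image; the application names come from the text, the from=/to= parameter style is the usual ISIS convention, and the filenames (as well as the use of spiceinit for the SPICE step) are our assumptions rather than the authors' actual script.

    ```python
    import subprocess

    def run(cmd):
        """Run one ISIS application and fail loudly on error."""
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    # Hypothetical filenames for a single AMICA frame.
    raw, cub, cal, mapped = "frame.fits", "frame.cub", "frame.cal.cub", "frame.map.cub"

    run(["amica2isis", f"from={raw}", f"to={cub}"])   # ingest into ISIS
    run(["sumspice", f"from={cub}"])                  # update start time / control
    run(["amicacal", f"from={cub}", f"to={cal}"])     # radiometric calibration
    run(["spiceinit", f"from={cal}"])                 # attach SPICE ephemerides (assumed)
    run(["cam2map", f"from={cal}", f"to={mapped}"])   # project to a map
    ```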

  10. Further development of image processing algorithms to improve detectability of defects in Sonic IR NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2017-02-01

    Sonic Infrared imaging (SIR) technology is a relatively new NDE technique that has received significant acceptance in the NDE community. SIR NDE is a super-fast, wide-range NDE method. The technology uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in the structures under inspection. Defects become visible to the IR camera when the temperature in the crack vicinity increases due to various heating mechanisms in the specimen. Defect detection is highly affected by noise levels as well as by mode patterns in the image. Mode patterns result from the superposition of sonic waves interfering within the specimen during the application of the sound pulse. Mode patterns can be a serious concern, especially in composite structures: they can either mimic real defects in the specimen or, alternatively, hide defects if they overlap. At last year's QNDE we presented algorithms to improve defect detectability in severe noise. In this paper, we present our development of defect-extraction algorithms that specifically target mode patterns in SIR images.

  11. Results of the 1989 Self-Image Survey: Catonsville Community College.

    ERIC Educational Resources Information Center

    Turcott, Frances; Linksz, Donna

    Catonsville Community College (CCC) conducted a self-image survey to examine employees' perceptions about the college's instructional and student support programs and the general college environment. The survey was distributed to all full-time faculty, administrators, and classified personnel. It was also distributed to adjunct faculty during the…

  12. Strategic Marketing: The Use of Image Assessment and Marketing Review.

    ERIC Educational Resources Information Center

    Wilhelmi, Charlotte; And Others

    In 1986, Northern Virginia Community College (NVCC) conducted a marketing review to assess the achievement of marketing objectives, identify the most effective marketing activities, assess the community's awareness of and the image of NVCC, assess the perceived quality and appropriateness of the college's programs and services, and formulate…

  13. Using Remote Sensing to Visualize and Extract Building Inventories of Urban Areas for Disaster Planning and Response

    NASA Astrophysics Data System (ADS)

    Lang, A. F.; Salvaggio, C.

    2016-12-01

    Climate change, skyrocketing global population, and increasing urbanization have set the stage for more so-called "mega-disasters." We possess the knowledge to mitigate and predict the scope of these events, and recent advancements in remote sensing can inform these efforts. Satellite and aerial imagery can be obtained anywhere of interest; unmanned aerial systems can be deployed quickly; and improved sensor resolutions and image processing techniques allow close examination of the built environment. Combined, these technologies offer an unprecedented ability for the disaster community to visualize, assess, and communicate risk. Disaster mitigation and response efforts rely on an accurate representation of the built environment, including knowledge of building types, structural characteristics, and juxtapositions to known hazards. The use of remote sensing to extract these inventory data has come far in the last five years. Researchers in the Digital Imaging and Remote Sensing (DIRS) group at the Rochester Institute of Technology are meeting the needs of the disaster community through the development of novel image processing methods capable of extracting detailed information on individual buildings. DIRS researchers have pioneered the ability to generate three-dimensional building models from point cloud imagery (e.g., LiDAR). This method can process an urban area and recreate it in a navigable virtual reality environment such as Google Earth within hours. Detailed geometry is obtained for individual structures (e.g., footprint, elevation). In a recent step forward, these geometric data can now be combined with imagery from other sources, such as high resolution or multispectral imagery. The latter ascribes a spectral signature to individual pixels, suggesting construction material. Ultimately, these individual building data are amassed over an entire region, facilitating aggregation and risk modeling analyses. The downtown region of Rochester, New York is presented as a case study. High-resolution optical, LiDAR, and multispectral imagery of this region was captured. Using the techniques described, these imagery sources are combined and processed to produce a holistic representation of the built environment, inclusive of individual building characteristics.

  14. Innovative Approaches for the Dissemination of Near Real-time Geostationary Satellite Data for Terrestrial and Space Weather Applications

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; McGrath, K.; Meyer, P. J.; Berndt, E.

    2017-12-01

    A GOES-R series receiving station has been installed at the NASA Marshall Space Flight Center (MSFC) to support GOES-16 transition-to-operations projects of NASA's Earth science program and provide a community portal for GOES-16 data access. This receiving station is comprised of a 6.5-meter dish; motor-driven positioners; Quorum feed and demodulator; and three Linux workstations for ingest, processing, display, and subsequent product generation. The Community Satellite Processing Package (CSPP) is used to process GOES Rebroadcast data from the Advanced Baseline Imager (ABI), Geostationary Lightning Mapper (GLM), Solar Ultraviolet Imager (SUVI), Extreme Ultraviolet and X-ray Irradiance Sensors (EXIS), and Space Environment In-Situ Suite (SEISS) into Level 1b and Level 2 files. GeoTIFFs of the imagery from several of these instruments are ingested into an Esri Arc Enterprise Web Map Service (WMS) server with tiled imagery displayable through a web browser interface or by connecting directly to the WMS with a Geographic Information System software package. These data also drive a basic web interface where users can manually zoom to and animate regions of interest, or acquire similar results using a published Application Program Interface. While not as interactive as a WMS-driven interface, this system is much more expeditious at generating and distributing requested imagery. The legacy web capability enacted for the predecessor GOES Imager currently supports approximately 500,000 unique visitors each month. Dissemination capabilities have been refined to support a significantly larger number of anticipated users. The receiving station also supports NASA's Short-term Prediction, Research, and Transition Center (SPoRT) in disseminating near real-time ABI RGB products to National Weather Service National Centers, including the Satellite Analysis Branch, National Hurricane Center, Ocean Prediction Center, and Weather Prediction Center, where they are displayed in N-AWIPS and AWIPS II. The multitude of additional real-time data users include the U.S. Coast Guard, Federal Aviation Administration, and The Weather Company. A second antenna is being installed for the ingest, processing, and dissemination of GOES-S data.

  15. Single-shot full resolution region-of-interest (ROI) reconstruction in image plane digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Singh, Mandeep; Khare, Kedar

    2018-05-01

    We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term that are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI which is not possible to achieve with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from a large field-of-view digital holographic microscopy data. The complementary phase information in addition to the usual absorption information already available in the form of bright field microscopy can make the methodology attractive to the biomedical user community.
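    As a hedged illustration of the cost structure described above, and not the authors' algorithm (which alternates the two terms adaptively and operates on complex-valued holographic data), the sketch below minimizes an L2-norm squared data-fitting term plus a Huber penalty on finite differences of a real image by plain gradient descent.

    ```python
    import numpy as np

    def huber_grad(t, delta):
        """Gradient of the Huber penalty: quadratic near zero, linear in the tails."""
        return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

    def roi_reconstruct(b, mask, lam=0.1, delta=0.05, step=0.5, iters=200):
        """Minimize ||mask*x - b||^2 + lam * Huber(finite differences of x)."""
        x = b.astype(float).copy()
        for _ in range(iters):
            grad_data = 2.0 * mask * (mask * x - b)
            dx = np.diff(x, axis=1, append=x[:, -1:])   # horizontal differences
            dy = np.diff(x, axis=0, append=x[-1:, :])   # vertical differences
            gx = huber_grad(dx, delta)
            gy = huber_grad(dy, delta)
            # adjoint of the difference operator (boundaries kept crude via roll)
            grad_reg = (np.roll(gx, 1, axis=1) - gx) + (np.roll(gy, 1, axis=0) - gy)
            x -= step * (grad_data + lam * grad_reg)
        return x

    # Toy usage: denoise data observed only inside a rectangular ROI.
    rng = np.random.default_rng(1)
    truth = np.outer(np.hanning(64), np.hanning(64))
    mask = np.zeros((64, 64))
    mask[16:48, 16:48] = 1.0
    data = mask * (truth + 0.05 * rng.standard_normal((64, 64)))
    recon = roi_reconstruct(data, mask)
    ```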

  16. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging: it sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  17. Memory preservation made prestigious but easy

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Debus, Christina; Sandhaus, Philipp

    2011-01-01

    Preserving memories combined with story-telling, using either photo books for multiple images or high-quality products such as one or a few images printed on canvas or mounted on acrylic to create wall decorations, is gradually becoming more popular than classical 4x6 prints and classical silver halide posters. Digital printing via electrophotography and ink jet is increasingly replacing classical silver halide technology as the dominant production technology for these kinds of products. Maintaining a consistent and comparable quality of output is more challenging than with silver halide paper, for both prints and posters. This paper describes a unique approach that combines desktop-based software to initiate a compelling project with online capabilities to finalize and optimize that project in an online environment through a community process. A comparison of consumer behavior between online and desktop-based solutions for generating photo books will be presented.

  18. NVSIM: UNIX-based thermal imaging system simulator

    NASA Astrophysics Data System (ADS)

    Horger, John D.

    1993-08-01

    For several years the Night Vision and Electronic Sensors Directorate (NVESD) has been using an internally developed forward looking infrared (FLIR) simulation program. In response to interest in the simulation part of these projects by other organizations, NVESD has been working on a new version of the simulation, NVSIM, that will be made generally available to the FLIR-using community. NVSIM uses basic FLIR specification data, high resolution thermal input imagery, and spatial domain image processing techniques to produce simulated image outputs from a broad variety of FLIRs. It is being built around modular programming techniques to allow simpler addition of more sensor effects. The modularity also allows selective inclusion and exclusion of individual sensor effects at run time. The simulation has been written in the industry standard ANSI C programming language under the widely used UNIX operating system to make it easily portable to a wide variety of computer platforms.
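    NVSIM itself is ANSI C and models many effects derived from the FLIR specification data; to keep all examples in this collection in one language, the hedged Python sketch below shows the spirit of such spatial-domain simulation with just three stages: an optics-blur proxy, resampling to the detector grid, and additive temporal noise. All names and parameter values are ours.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate_flir(scene, blur_sigma=2.0, out_shape=(240, 320),
                      noise_sigma=0.05, seed=0):
        """Crude spatial-domain FLIR simulation: blur, sample, add noise."""
        rng = np.random.default_rng(seed)
        blurred = gaussian_filter(scene.astype(float), blur_sigma)  # optics MTF proxy
        ys = np.linspace(0, scene.shape[0] - 1, out_shape[0]).astype(int)
        xs = np.linspace(0, scene.shape[1] - 1, out_shape[1]).astype(int)
        sampled = blurred[np.ix_(ys, xs)]                           # detector sampling
        return sampled + rng.normal(0.0, noise_sigma, sampled.shape)

    # Toy usage with a synthetic thermal scene containing one warm target.
    scene = np.zeros((512, 512))
    scene[200:300, 150:350] = 1.0
    frame = simulate_flir(scene)
    ```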

  19. Cloud Computing for radiologists

    PubMed Central

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560

  20. The Practical Application of Uav-Based Photogrammetry Under Economic Aspects

    NASA Astrophysics Data System (ADS)

    Sauerbier, M.; Siegrist, E.; Eisenbeiss, H.; Demir, N.

    2011-09-01

    Nowadays, small size UAVs (Unmanned Aerial Vehicles) have reached a level of practical reliability and functionality that enables this technology to enter the geomatics market as an additional platform for spatial data acquisition. Though one could imagine a wide variety of interesting sensors to be mounted on such a device, here we will focus on photogrammetric applications using digital cameras. In practice, UAV-based photogrammetry will only be accepted if it a) provides the required accuracy and an additional value and b) is competitive in terms of economic application compared to other measurement technologies. While a) was already proven by the scientific community and results were published comprehensively during the last decade, b) still has to be verified under real conditions. For this purpose, a test data set representing a realistic scenario provided by ETH Zurich was used to investigate cost effectiveness and to identify weak points in the processing chain that require further development. Our investigations are limited to UAVs carrying digital consumer cameras; for larger UAVs equipped with medium format cameras the situation has to be considered as significantly different. Image data was acquired during flights using a microdrones MD4-1000 quadrocopter equipped with an Olympus PE-1 digital compact camera. From these images, a subset of 5 images was selected for processing, in order to record the time required for the whole production chain of photogrammetric products. We see the potential of mini UAV-based photogrammetry mainly in smaller areas, up to a size of ca. 100 hectares. Larger areas can be efficiently covered by small airplanes with few images, reducing processing effort drastically. In the case of smaller areas of a few hectares only, it depends more on the products required. UAVs can be an enhancement or alternative to GNSS measurements, terrestrial laser scanning and ground based photogrammetry. We selected the above mentioned test data from a project featuring an area of interest within the practical range for mini UAVs. While flight planning and flight operation are already quite efficient processes, the bottlenecks identified are mainly related to image processing. Although we used specific software for image processing, the identified gaps in the processing chain today are valid for most commercial photogrammetric software systems on the market. An outlook proposing improvements for a practicable workflow applicable to projects in private economy will be given.
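    A first economic screening of such a mission can be done with elementary imaging geometry: ground sample distance and single-frame footprint determine how many images, and thus how much processing effort, an area requires. The sketch below implements both formulas; the camera numbers are hypothetical stand-ins for a compact camera, not the specifications of the system flown in the tests.

    ```python
    def ground_sample_distance(height_m, focal_length_mm, pixel_pitch_um):
        """GSD (m/pixel) for a nadir image: pixel pitch * height / focal length."""
        return (pixel_pitch_um * 1e-6) * height_m / (focal_length_mm * 1e-3)

    def footprint_hectares(gsd_m, width_px, height_px):
        """Ground footprint of a single frame, in hectares."""
        return (gsd_m * width_px) * (gsd_m * height_px) / 10_000.0

    # Hypothetical numbers for a compact camera flown at 150 m altitude.
    gsd = ground_sample_distance(150.0, 17.0, 4.3)   # about 0.038 m/pixel
    print(gsd, footprint_hectares(gsd, 4032, 3024))  # roughly 1.8 ha per frame
    ```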

  1. CCCT - NCTN Steering Committees - Clinical Imaging

    Cancer.gov

    The Clinical Imaging Steering Committee serves as a forum for the extramural imaging and oncology communities to provide strategic input to the NCI regarding its significant investment in imaging activities in clinical trials.

  2. Imaging-Assisted Large-Format Breast Pathology: Program Rationale and Development in a Nonprofit Health System in the United States

    PubMed Central

    Tucker, F. Lee

    2012-01-01

    Modern breast imaging, including magnetic resonance imaging, provides an increasingly clear depiction of breast cancer extent, often with suboptimal pathologic confirmation. Pathologic findings guide management decisions, and small increments in reported tumor characteristics may rationalize significant changes in therapy and staging. Pathologic techniques to grossly examine resected breast tissue have changed little during this era of improved breast imaging and still rely primarily on the techniques of gross inspection and specimen palpation. Only limited imaging information is typically conveyed to pathologists, typically in the form of wire-localization images from breast-conserving procedures. Conventional techniques of specimen dissection and section submission destroy the three-dimensional integrity of the breast anatomy and tumor distribution. These traditional methods of breast specimen examination impose unnecessary limitations on correlation with imaging studies, measurement of cancer extent, multifocality, and margin distance. Improvements in pathologic diagnosis, reporting, and correlation of breast cancer characteristics can be achieved by integrating breast imagers into the specimen examination process and the use of large-format sections which preserve local anatomy. This paper describes the successful creation of a large-format pathology program to routinely serve all patients in a busy interdisciplinary breast center associated with a community-based nonprofit health system in the United States. PMID:23316372

  3. 7T MRI subthalamic nucleus atlas for use with 3T MRI.

    PubMed

    Milchenko, Mikhail; Norris, Scott A; Poston, Kathleen; Campbell, Meghan C; Ushe, Mwiza; Perlmutter, Joel S; Snyder, Abraham Z

    2018-01-01

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) reduces motor symptoms in most patients with Parkinson disease (PD), yet may produce untoward effects. Investigation of DBS effects requires accurate localization of the STN, which can be difficult to identify on magnetic resonance images collected with clinically available 3T scanners. The goal of this study is to develop a high-quality STN atlas that can be applied to standard 3T images. We created a high-definition STN atlas derived from seven older participants imaged at 7T. This atlas was nonlinearly registered to a standard template representing 56 patients with PD imaged at 3T. This process required development of methodology for nonlinear multimodal image registration. We demonstrate mm-scale STN localization accuracy by comparison of our 3T atlas with a publicly available 7T atlas. We also demonstrate less agreement with an earlier histological atlas. STN localization error in the 56 patients imaged at 3T was less than 1 mm on average. Our methodology enables accurate STN localization in individuals imaged at 3T. The STN atlas and underlying 3T average template in MNI space are freely available to the research community. The image registration methodology developed in the course of this work may be generally applicable to other datasets.
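    The abstract does not publish the registration code, so the following is only a hedged sketch of how a multimodal (mutual-information) registration between a 7T atlas image and a 3T template might be set up with the open-source SimpleITK library. The affine transform is a stand-in for the initial stage of what the paper describes as a nonlinear pipeline, and all filenames and parameter values are assumptions.

    ```python
    import SimpleITK as sitk

    # Hypothetical inputs: a 3T group template (fixed) and a 7T atlas (moving).
    fixed = sitk.ReadImage("template_3T.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("atlas_7T.nii.gz", sitk.sitkFloat32)

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    tx = reg.Execute(fixed, moving)                    # estimate the transform
    resampled = sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0)
    sitk.WriteImage(resampled, "atlas_in_3T_space.nii.gz")
    ```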

  4. Visualizing Microbial Biogeochemistry: NanoSIMS and Stable Isotope Probing (Invited)

    NASA Astrophysics Data System (ADS)

    Pett-Ridge, J.; Weber, P. K.

    2009-12-01

    Linking phylogenetic information to function in microbial communities is a key challenge for microbial ecology. Isotope-labeling experiments provide a useful means to investigate the ecophysiology of microbial populations and cells in the environment and allow measurement of nutrient transfers between cell types, symbionts and consortia. The combination of Nano-Secondary Ion Mass Spectrometry (NanoSIMS) analysis, in situ labeling and high resolution microscopy allows isotopic analysis to be linked to phylogeny and morphology and holds great promise for fine-scale studies of microbial systems. In NanoSIMS analysis, samples are sputtered with an energetic primary beam (Cs+, O-) liberating secondary ions that are separated by the mass spectrometer and detected in a suite of electron multipliers. Five isotopic species may be analyzed concurrently with spatial resolution as fine as 50nm. A high sensitivity isotope ratio ‘map’ can then be generated for the analyzed area. NanoSIMS images of 13C, 15N and Mo (a nitrogenase co-factor) localization in diazotrophic cyanobacteria show how cells differentially allocate resources within filaments and allow calculation of nutrient uptake rates on a cell by cell basis. Images of AM fungal hyphae-root and cyanobacteria-rhizobia associations indicate the mobilization and sharing (stealing?) of newly fixed C and N. In a related technique, “El-FISH”, stable isotope labeled biomass is probed with oligonucleotide-elemental labels and then imaged by NanoSIMS. In microbial consortia and cyanobacterial mats, this technique helps link microbial structure and function simultaneously even in systems with unknown and uncultivated microbes. Finally, the combination of re-engineered universal 16S oligonucleotide microarrays with NanoSIMS analyses may allow microbial identity to be linked to functional roles in complex systems such as mats and cellulose degrading hindgut communities. These newly developed methods provide correlated oligonucleotide, functional enzyme and metabolic image data and should help unravel the metabolic processes of complex microbial communities in soils, biofilms and aquatic systems.
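    At its simplest, the isotope ratio 'map' described above is a per-pixel ratio of two aligned count planes with some care for counting statistics. The sketch below is a generic illustration under that assumption (light smoothing plus a minimum-counts threshold so near-empty pixels do not produce wild ratios); it is not the instrument software.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def isotope_ratio_map(minor_counts, major_counts, box=3, min_counts=10):
        """Per-pixel isotope ratio image (e.g., 13C/12C) from two aligned
        NanoSIMS count planes."""
        minor = uniform_filter(minor_counts.astype(float), box)   # light smoothing
        major = uniform_filter(major_counts.astype(float), box)
        return np.where(major >= min_counts,                      # statistics gate
                        minor / np.maximum(major, 1e-9), np.nan)
    ```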

  5. Teleophthalmology with optical coherence tomography imaging in community optometry. Evaluation of a quality improvement for macular patients

    PubMed Central

    Kelly, Simon P; Wallwork, Ian; Haider, David; Qureshi, Kashif

    2011-01-01

    Purpose To describe a quality improvement for referral of National Health Service patients with macular disorders from a community optometry setting in an urban area. Methods Service evaluation of teleophthalmology consultation based on spectral domain optical coherence tomography images acquired by the community optometrist and transmitted to hospital eye services. Results Fifty patients with suspected macular conditions were managed via telemedicine consultation over 1 year. Responses were provided by hospital eye service-based ophthalmologists to the community optometrist or patient within the next day in 48 cases (96%) and in 34 (68%) patients on the same day. In the consensus opinion of the optometrist and ophthalmologist, 33 (66%) patients required further “face-to-face” medical examination and were triaged on clinical urgency. Seventeen cases (34%) were managed in the community and are a potential cost improvement. Specialty trainees were supervised in telemedicine consultations. Conclusion Innovation and quality improvement were demonstrated in both optometry to ophthalmology referrals and in primary optometric care by use of telemedicine with spectral domain optical coherence tomography images. E-referral of spectral domain optical coherence tomography images assists triage of macular patients and swifter care of urgent cases. Teleophthalmology is also, in the authors’ opinion, a tool to improve interdisciplinary professional working with community optometrists. Implications for progress are discussed. PMID:22174576

  6. JunoCam: Outreach and Science Opportunities

    NASA Astrophysics Data System (ADS)

    Hansen, Candice; Ingersoll, Andy; Caplinger, Mike; Ravine, Mike; Orton, Glenn

    2014-11-01

    JunoCam is a visible imager on the Juno spacecraft en route to Jupiter. Although the primary role of the camera is for outreach, science objectives will be addressed too. JunoCam is a wide angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm. Juno’s elliptical polar orbit will offer unique views of Jupiter’s polar regions with a spatial scale of ~50 km/pixel. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired. Stereo images and images taken with the methane filter will allow us to estimate cloudtop heights. Resolution exceeds that of Cassini about an hour from closest approach, and at closest approach images will have a spatial scale of ~3 km/pixel. JunoCam is a push-frame imager on a rotating spacecraft. The use of time-delayed integration takes advantage of the spacecraft spin to build up signal. JunoCam will acquire limb-to-limb views of Jupiter during a spacecraft rotation, and has the possibility of acquiring images of the rings from in-between Jupiter and the inner edge of the rings. Galilean satellite views will be fairly distant but some images will be acquired. Outer irregular satellites and small ring moons Metis and Adrastea will also be imaged. The theme of our outreach is “science in a fish bowl”, with an invitation to the science community and the public to participate. Amateur astronomers will supply their ground-based images for planning, so that we can predict when prominent atmospheric features will be visible. With the aid of professional astronomers observing at infrared wavelengths, we’ll predict when hot spots will be visible to JunoCam. Amateur image processing enthusiasts are onboard to create image products. Many of the earth flyby image products from Juno’s earth gravity assist were processed by amateurs. Between the planning and products will be the decision-making on what images to take when and why. We invite our colleagues to propose science questions for JunoCam to address, and to be part of the participatory process of deciding how to use our resources and scientifically analyze the data.
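    JunoCam's time-delayed integration happens on the detector as the spacecraft spins, but the bookkeeping is easy to state in software: successive framelets are co-added after shifting each by the known image motion, so signal accumulates without smearing. The sketch below illustrates this under the simplifying assumption of a whole-row shift per framelet; it is not flight software.

    ```python
    import numpy as np

    def tdi_sum(framelets, shift_per_frame=1):
        """Toy time-delayed integration: shift each framelet to undo the
        known image motion, then co-add so signal builds up in registration."""
        acc = np.zeros_like(framelets[0], dtype=float)
        for k, frame in enumerate(framelets):
            acc += np.roll(frame, -k * shift_per_frame, axis=0)  # undo row drift
        return acc

    # Toy usage: five noisy framelets of a scene drifting one row per frame.
    rng = np.random.default_rng(2)
    scene = np.zeros((32, 32))
    scene[10:20, 10:20] = 1.0
    frames = [np.roll(scene, k, axis=0) + 0.3 * rng.standard_normal((32, 32))
              for k in range(5)]
    stacked = tdi_sum(frames)   # SNR grows roughly with sqrt(number of framelets)
    ```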

  7. Dysmorphic concern is related to delusional proneness and negative affect in a community sample.

    PubMed

    Keating, Charlotte; Thomas, Neil; Stephens, Jessie; Castle, David J; Rossell, Susan L

    2016-06-30

    Body image concerns are common in the general population and in some mental illnesses reach pathological levels. We investigated whether dysmorphic concern with appearance (a preoccupation with minor or imagined defects in appearance) is explained by psychotic processes in a community sample. In a cross-sectional design, two hundred and twenty-six participants completed an online survey battery including the Dysmorphic Concern Questionnaire, the Peters Delusional Inventory, the Aberrant Salience Inventory, and the Depression, Anxiety, Stress Scale. Participants were native English speakers residing in Australia. Dysmorphic concern was positively correlated with delusional proneness, aberrant salience and negative emotion. Regression established that negative emotion and delusional proneness predicted dysmorphic concern, whereas aberrant salience did not. Although delusional proneness was related to body dysmorphia, there was no evidence that it was related to aberrant salience. Understanding the contribution of other psychosis processes and other health-related variables to the severity of dysmorphic concern will be a focus of future research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Public Hearing or `Hearing Public'? An Evaluation of the Participation of Local Stakeholders in Environmental Impact Assessment of Ghana's Jubilee Oil Fields

    NASA Astrophysics Data System (ADS)

    Bawole, Justice Nyigmah

    2013-08-01

    This article investigates the involvement of local stakeholders in the environmental impact assessment (EIA) processes of Ghana's first off-shore oil fields (the Jubilee fields). Drawing on key informant interviews and documentary reviews, the article argues that the public hearings and the other stakeholder engagement processes were cosmetic and rhetorical, undertaken to meet legal requirements rather than out of a purposeful interest in eliciting inputs from local stakeholders. It further argues that the operators appear to lack the social legitimacy and social license that would make them acceptable in the project communities. Rigorous community engagement, along with a commitment to actively involving local stakeholders in the corporate social responsibility (CSR) programmes of the partners, may enhance the image of the partners and improve their social legitimacy. Local government agencies should be capacitated to actively engage project organisers, and government must mitigate the impact of the oil projects through well-structured social support programmes.

  10. Unification and Enhancement of Planetary Robotic Vision Ground Processing: The EC FP7 Project PRoVisG

    NASA Astrophysics Data System (ADS)

    Paar, G.

    2009-04-01

    To date, mainly the US has realized planetary space missions with an essential robotics component. Joining institutions, companies and universities from established groups in Europe with two relevant players from the US, the EC FP7 Project PRoVisG started in autumn 2008 to demonstrate the European ability to realize high-level processing of robotic vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for Robotic Vision Ground Processing. State-of-the-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present and future robotic space missions to the Moon and the planets. This will lead to a significant enhancement of the scientific, technological and educational outcome of such missions. We report on the main PRoVisG objectives and the development status: - Past, present and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing. - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG Consortium Partners will be complemented by, and combined with, modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs). - A Web GIS is being developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context. - Towards the end of the 39-month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain. European taxpayers will be able to monitor the imaging and vision processing in a Mars-like environment, gaining insight into the complexity and methods of processing, the potential and decision making of scientific exploitation of such data, and not least the elegance and beauty of the resulting image products and their visualization. - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to students, the future providers of European science and technology inside and outside the space domain.

  11. Bring the Poles to Your Classroom & Community Through Linked Hands-on Learning & IPY Data

    NASA Astrophysics Data System (ADS)

    Turrin, M.; Bell, R. E.; Kastens, K. A.; Pfirman, S. L.

    2009-12-01

    Two major legacies of the 4th International Polar Year (IPY 2007-9) are a newly galvanized educational community and an immense volume of polar data collected by the global science community. These tremendous new polar datasets represent a unique opportunity to communicate the nature of the changing poles to student and public audiences through this polar-savvy educational community, if effective approaches to link data and understanding are employed. We have developed a strategy for polar education that leverages the IPY data resources, linked with hands-on polar ‘manipulatives’ (materials that students can manipulate in a dynamic manner). This linked approach builds on fundamental inquiry-based learning but recognizes that, particularly in the polar sciences, the size of the Earth, the remoteness of the poles and the scale of polar processes make it difficult for students to explore in a hands-on manner. Linking polar hands-on ‘manipulatives’ with IPY data provides a bridge between the tangible and the global. On their own, manipulative activities can help students visualize a process or behavior, but without a strong link back to the Earth through data or evidence, understanding of the process is not transferred from the classroom model to the full-scale Earth. The use of activities or models is beneficial in connecting the learner to the polar process(es), while the IPY data provide a unique opportunity to ground the polar manipulative experiments in real data. This linked strategy emerged from a series of NSF-sponsored IPY Polar Fairs at major science museums that reached in excess of 12,000 people. At the fairs, polar scientists developed activities linking low-cost hands-on manipulatives to scientific evidence/data displayed in posters, images, and video clips. The participating scientists walked the ‘audience’ through the hands-on manipulative, then discussed their evidence while providing the reasoning. Adapting this linked manipulative/data approach to the community of teachers will provide a very tangible educational outcome of IPY for this community. Our linked manipulative-data strategy ensures polar processes are demonstrated, measured and then matched with IPY data sets, so that guided exploration gives students the tools to generate the reasoning. This linked strategy is a powerful way to engage students in Earth science and provide them with an entry to the wealth of professionally collected data sets available from both IPY and the broader science community, all while aligning with National Science Standards. We will demonstrate this approach and show how the linked manipulative-data approach can be used effectively in community and school events to reach a wider audience.

  12. Practicing chemical process safety: a look at the layers of protection.

    PubMed

    Sanders, Roy E

    2004-11-11

    This presentation will review a few public perceptions of safety in chemical plants and refineries, and will compare these plant workplace risks to those of some more traditional occupations. The central theme of this paper is to provide a "within-the-fence" view of many of the process safety practices that world-class plants perform to proactively protect people, property, profits and the environment. It behooves each chemical plant and refinery to tell its story in an image-rich presentation that stresses stewardship and process safety. Such a program can reassure the company's employees and help convince the community that the many layers of safety protection within our plants are effective and protect all from harm.

  13. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model from images, and these image-based methods have limitations. This paper gives a new approach towards image-based virtual 3D city modeling by using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area, and scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created, which can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries and high-resolution satellite images are costly; the proposed method, in contrast, is based on simple video recording of the area, which makes it suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as planning in navigation, tourism, disaster management, transportation, municipal, urban and environmental management, and the real-estate industry. This study will thus provide a good roadmap for the geomatics community to create photo-realistic virtual 3D city models by using close range photogrammetry.
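
    As a hedged illustration of the data-acquisition step described above (video recording followed by frame selection), the OpenCV sketch below saves every Nth frame of a video for later structure-from-motion processing. The file names and sampling interval are hypothetical, and the paper's own frame-selection criteria are not reproduced here.

```python
import cv2  # pip install opencv-python

def extract_frames(video_path, out_dir, every_n=30):
    """Save every `every_n`-th frame of `video_path` as a JPEG in `out_dir`,
    mimicking the frame-selection step of a video-based photogrammetry
    pipeline (well-separated frames suit structure-from-motion)."""
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Hypothetical usage:
# n = extract_frames("campus_walkthrough.mp4", "frames", every_n=15)
```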

  14. Image, word, action: interpersonal dynamics in a photo-sharing community.

    PubMed

    Suler, John

    2008-10-01

    In online photo-sharing communities, the individual's expression of self and the relationships that evolve among members are determined by the kinds of images that are shared, by the words exchanged among members, and by interpersonal actions that do not specifically rely on images or text. This article examines the dynamics of personal expression via images in Flickr, including a proposed system for identifying the dimensions of imagistic communication and a discussion of the psychological meanings embedded in a sequence of images. It explores how photographers use text descriptors to supplement their images and how different types of comments on photographs influence interpersonal relationships. The "fav"--when members choose an image as one of their favorites--is examined as one type of action that can serve a variety of interpersonal functions. Although images play a powerful role in the expression of self, it is the integration of images, words, and actions that maximizes the development of relationships.

  15. Application of the 4-D XCAT Phantoms in Biomedical Imaging and Beyond.

    PubMed

    Segars, W Paul; Tsui, B M W; Cai, Jing; Yin, Fang-Fang; Fung, George S K; Samei, Ehsan

    2018-03-01

    The four-dimensional (4-D) eXtended CArdiac-Torso (XCAT) series of phantoms was developed to provide accurate computerized models of the human anatomy and physiology. The XCAT series encompasses a vast population of phantoms of varying ages from newborn to adult, each including parameterized models for the cardiac and respiratory motions. With great flexibility in the XCAT's design, any number of body sizes, anatomies, cardiac or respiratory motions or patterns, patient positions and orientations, and spatial resolutions can be simulated. As such, the XCAT phantoms are gaining wide use in biomedical imaging research, where they provide a virtual patient base from which to quantitatively evaluate and improve imaging instrumentation, data acquisition techniques, and image reconstruction and processing methods, which can lead to improved image quality and more accurate clinical diagnoses. The phantoms have also found great use in radiation dosimetry, radiation therapy, medical device design, and even the security and defense industry. This review paper highlights some specific areas in which the XCAT phantoms have found use within biomedical imaging and other fields. From these examples, we illustrate the increasingly important role that computerized phantoms and computer simulation are playing in the research community.

  16. Image motion environments: background noise for movement-based animal signals.

    PubMed

    Peters, Richard; Hemmi, Jan; Zeil, Jochen

    2008-05-01

    Understanding the evolution of animal signals has to include consideration of the structure of signal and noise, and the sensory mechanisms that detect the signals. Considerable progress has been made in understanding sounds and colour signals; however, the degree to which movement-based signals are constrained by particular patterns of environmental image motion is poorly understood. Here we have quantified the image motion generated by wind-blown plants at 12 sites in the coastal habitat of the Australian lizard Amphibolurus muricatus. Sampling across different plant communities and meteorological conditions revealed distinct image motion environments. At all locations, image motion became more directional and apparent speed increased as wind speeds increased. The magnitude of these changes and the spatial distribution of image motion, however, varied between locations, probably as a function of plant structure and topographic location. In addition, we show that background motion noise depends strongly on the particular depth structure of the environment and argue that such micro-habitat differences suggest specific strategies to preserve signal efficacy. Movement-based signals and motion processing mechanisms, therefore, may reveal the same type of habitat-specific structural variation that we see for signals from other modalities.
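
    A common way to quantify image motion of this kind is dense optical flow. The sketch below is an assumption about method (not necessarily the authors' pipeline): it uses OpenCV's Farneback algorithm to summarize the apparent speed and directionality of motion between two consecutive frames of plant footage, echoing the speed and directionality measures discussed above.

```python
import cv2
import numpy as np

def motion_summary(prev_bgr, next_bgr):
    """Return mean apparent speed (pixels/frame) and a 0-1 directionality
    index (circular resultant length) for one pair of video frames."""
    g0 = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)
    angle = np.arctan2(flow[..., 1], flow[..., 0])
    # Circular statistics: directionality approaches 1 when motion aligns.
    r = np.hypot(np.cos(angle).mean(), np.sin(angle).mean())
    return speed.mean(), r
```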

  17. Multifit / Polydefix : a framework for the analysis of polycrystal deformation using X-rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merkel, Sébastien; Hilairet, Nadège

    2015-06-27

    Multifit/Polydefix is an open source IDL software package for the efficient processing of diffraction data obtained in deformation apparatuses at synchrotron beamlines. Multifit allows users to decompose two-dimensional diffraction images into azimuthal slices, fit peak positions, shapes and intensities, and propagate the results to other azimuths and images. Polydefix is for analysis of deformation experiments. Starting from output files created in Multifit or other packages, it will extract elastic lattice strains, evaluate sample pressure and differential stress, and prepare input files for further texture analysis. The Multifit/Polydefix package is designed to make the tedious data analysis of synchrotron-based plasticity, rheology or other time-dependent experiments very straightforward and accessible to a wider community.
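
    The essential Multifit operation, binning a 2-D diffraction image into azimuthal slices and fitting a peak in each radial profile, can be sketched in Python as a rough stand-in for the IDL package. The beam-center coordinates, slice count, and Gaussian peak model below are illustrative assumptions, not the package's actual implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def azimuthal_slices(img, center, n_slices=72):
    """Bin a 2-D diffraction image into (azimuth, radius) mean profiles."""
    y, x = np.indices(img.shape)
    dx, dy = x - center[0], y - center[1]
    r = np.hypot(dx, dy).astype(int)
    az = (((np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0)
          / (360.0 / n_slices)).astype(int)
    prof = np.zeros((n_slices, r.max() + 1))
    counts = np.zeros_like(prof)
    np.add.at(prof, (az, r), img)
    np.add.at(counts, (az, r), 1)
    return prof / np.maximum(counts, 1)

def gauss(r, amp, mu, sig, bg):
    return amp * np.exp(-0.5 * ((r - mu) / sig) ** 2) + bg

def fit_peak(profile, lo, hi):
    """Fit one peak in the radial window [lo, hi) of a single slice,
    e.g. fit_peak(azimuthal_slices(img, (cx, cy))[0], 200, 260)."""
    r = np.arange(lo, hi)
    y = profile[lo:hi]
    p0 = (y.max() - y.min(), r[np.argmax(y)], (hi - lo) / 6.0, y.min())
    popt, _ = curve_fit(gauss, r, y, p0=p0)
    return popt  # amplitude, peak position, width, background
```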

  18. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
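
    A minimal sketch of the sparsity-exploiting reconstruction idea is plain iterative soft-thresholding (ISTA) for recovering a sparse x from undersampled measurements y = Ax. This is illustrative only and far simpler than the clinical CT/MRI reconstructions discussed in the article.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Synthetic demo: recover a 10-sparse signal from 4x undersampled data.
rng = np.random.default_rng(1)
n, m, k = 400, 100, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # should be small
```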

  19. High-resolution in-situ thermal imaging of microbial mats at El Tatio Geyser, Chile shows coupling between community color and temperature

    NASA Astrophysics Data System (ADS)

    Dunckel, Anne E.; Cardenas, M. Bayani; Sawyer, Audrey H.; Bennett, Philip C.

    2009-12-01

    Microbial mats have spatially heterogeneous structured communities that manifest visually through vibrant color zonation often associated with environmental gradients. We report the first use of high-resolution thermal infrared imaging to map temperature at four hot springs within the El Tatio Geyser Field, Chile. Thermal images with millimeter resolution show drastic variability and pronounced patterning in temperature, with changes on the order of 30°C within a square decimeter. Paired temperature and visual images show that zones with specific coloration occur within distinct temperature ranges. Unlike previous studies where maximum, minimum, and optimal temperatures for microorganisms are based on isothermally-controlled laboratory cultures, thermal imaging allows for mapping thousands of temperature values in a natural setting. This allows for efficiently constraining natural temperature bounds for visually distinct mat zones. This approach expands current understanding of thermophilic microbial communities and opens doors for detailed analysis of biophysical controls on microbial ecology.

  20. Red, purple and pink: the colors of diffusion on pinterest.

    PubMed

    Bakhshi, Saeideh; Gilbert, Eric

    2015-01-01

    Many lab studies have shown that colors can evoke powerful emotions and impact human behavior. Might these phenomena drive how we act online? A key research challenge for image-sharing communities is uncovering the mechanisms by which content spreads through the community. In this paper, we investigate whether there is a link between color and diffusion. Drawing on a corpus of one million images crawled from Pinterest, we find that color significantly impacts the diffusion of images and the adoption of content on image-sharing communities such as Pinterest, even after partially controlling for network structure and activity. Specifically, red, purple and pink seem to promote diffusion, while green, blue, black and yellow suppress it. To our knowledge, our study is the first to investigate how colors relate to online user behavior. In addition to contributing to the research conversation surrounding diffusion, these findings suggest future work using sophisticated computer vision techniques. We conclude with a discussion of the theoretical, practical and design implications suggested by this work, e.g. the design of engaging image filters.
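
    A study like this needs a per-image color descriptor. One hedged sketch (not the authors' code) labels each image by its dominant hue via k-means clustering in HSV space, a feature that could then enter a diffusion model alongside network-structure controls.

```python
import cv2
import numpy as np

def dominant_hue(path, k=3):
    """Return the dominant hue (degrees, 0-360) of an image, taken as the
    hue of the largest k-means cluster of its pixels in HSV space."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(hsv, k, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    top = np.bincount(labels.ravel()).argmax()   # largest pixel cluster
    return float(centers[top][0]) * 2.0          # OpenCV stores hue as 0-179
```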

  1. Hyperpolarized xenon-129 production and applications

    NASA Astrophysics Data System (ADS)

    Ruset, Iulian C.

    Hyperpolarized 3He and 129Xe were initially developed and used in the nuclear physics community. Lately they are primarily used in Magnetic Resonance Imaging (MRI). Although the first polarized-gas MRI images were acquired using 129Xe, the research community has focused mostly on 3He, due to its well-known polarizing methods and the higher polarization achieved. The main purpose of this thesis is to present a novel design of a large-scale SEOP polarizer for producing large quantities of highly polarized 129Xe. High Rb-Xe spin-exchange rates through long-lived van der Waals molecules at low total pressure, implemented in a novel counterflow polarizer design, resulted in xenon polarization as high as 50% at 1.2 liters/hour, with a maximum of 64% at 0.3 l/h. We characterized and improved the polarization process by finding the optimum operating parameters of the polarizer. Two new methods to efficiently use high-power diode lasers are described: a new optical arrangement for better beam shaping of fiber-coupled lasers, and the first external-cavity spectrum narrowing of a stack of laser diode arrays. A new accumulation technique for the hyperpolarized xenon was developed, and full recovery of polarization after a freeze-thaw cycle was demonstrated for the first time. Two approaches for xenon delivery, in the frozen and gas states, were developed. Hyperpolarized xenon transportation to Brigham and Women's Hospital (BWH) was successfully accomplished for collaborative research, and the first MRI images using hyperpolarized xenon acquired at BWH are presented. The final chapter describes a low-field human MRI scanner using hyperpolarized 3He. We built a human-scale imager with open access for orientational studies of lung functionality. Horizontal and vertical human lung images were acquired as a first stage of this project.

  2. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) uses the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. Measurements in US images using HI showed improved consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large, as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized for cardiac imaging on board the ISS in all future space missions.

  3. NEPTUNE Canada-status and planning

    NASA Astrophysics Data System (ADS)

    Bornhold, Brian D.

    2005-04-01

    Stage 1 of the joint Canada-U.S. NEPTUNE seafloor observatory has been funded by the Canada Foundation for Innovation and the British Columbia Knowledge Development Fund with an overall budget of $62.4 million. The network is designed to provide close-to-real-time data and images to the research community, government agencies, educational institutions and the public via the Internet. Covering much of the northern segment of the Juan de Fuca Plate, this first phase of the NEPTUNE project is scheduled to be installed, with an initial suite of "community experiments", in 2008. As part of the planning, NEPTUNE Canada held a series of three workshops to develop the science plans for these "community experiments", which have a budget of approximately $13 million. The experiments will cover the gamut of oceanographic science themes, including various aspects of ocean climate and marine productivity, seabed environments and biological communities, fluids at ocean ridges, gas hydrates and fluids on continental margins, and plate tectonic processes and the associated earthquakes and tsunamis. The next three years will be spent developing and testing the necessary instrumentation for deployment on the network.

  4. Perceived Community Functions and Supportive Residential Environments.

    ERIC Educational Resources Information Center

    Blake, Brian F.

    Results of an illustrative study emphasize the importance of the images that the elderly and the general public have of a rural community's services for senior citizens. These images help to identify ways in which programs and services can be tailored to the requirements of the elderly. Public support for political action that bears directly upon…

  5. Environmental drivers of epibenthic megafauna on a deep temperate continental shelf: A multiscale approach

    NASA Astrophysics Data System (ADS)

    Lacharité, Myriam; Metaxas, Anna

    2018-03-01

    Evaluating the role of abiotic factors in influencing the distribution of deep-water (>75-100 m depth) epibenthic megafaunal communities at mid-to-high latitudes is needed to estimate effects of environmental change, and support marine spatial planning since these factors can be effectively mapped. Given the disparity in scales at which these factors operate, incorporating multiple spatial and temporal scales is necessary. In this study, we determined the relative importance of 3 groups of environmental drivers at different scales (sediment, geomorphology, and oceanography) on epibenthic megafauna on a deep temperate continental shelf in the eastern Gulf of Maine (northwest Atlantic). Twenty benthic photographic transects (range: 611-1021 m; total length surveyed: 18,902 m; 996 images; average of 50 ± 16 images per transect) were performed in July and August 2009 to assess the abundance, composition and diversity of these communities. Surficial geology was assessed using seafloor imagery processed with a novel approach based on computer vision. A bathymetric terrain model (horizontal resolution: 100 m) was used to derive bathymetric variability in the vicinity of transects (1.5, 5 km). Oceanography at the seafloor (temperature, salinity, current speed, current direction) over 10 years (1999-2008) was determined using empirical (World Ocean Database 2013) and modelled data (Finite-Volume Community Ocean Model; 45 vertical layers; horizontal resolution: 1.7-9.5 km). The relative influence of environmental drivers differed between community traits. Abundance was enhanced primarily by swift current speeds, while higher diversity was observed in coarser and more heterogeneous substrates. In both cases, the role of geomorphological features was secondary to these drivers. Environmental variables were poor predictors of change in community composition at the scale of the eastern Gulf of Maine. This study demonstrated the need for explicitly incorporating scales into habitat modelling studies in these regions, and targeting specific drivers for community traits of interest.

  6. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. Conclusions: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
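
    Because each case pairs a CT scan with an XML annotation file, working with the database starts with parsing those files. The sketch below is a simplified, hedged reader: the tag names follow the published two-phase layout (one reading session per radiologist, nodule marks within each), but the real schema is namespaced and richer, so they should be checked against the actual LIDC/IDRI XML.

```python
import xml.etree.ElementTree as ET

def nodules_per_reader(xml_path):
    """Count nodule marks per reading session in one annotation file.
    Tag names are simplified assumptions; verify against the real schema."""
    root = ET.parse(xml_path).getroot()
    strip = lambda tag: tag.rsplit("}", 1)[-1]   # drop any XML namespace
    counts = []
    for elem in root.iter():
        if strip(elem.tag) == "readingSession":  # one per radiologist
            n = sum(1 for e in elem.iter()
                    if strip(e.tag) == "unblindedReadNodule")
            counts.append(n)
    return counts  # e.g., [12, 9, 14, 11] for the four readers
```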

  7. Plasma Treatment to Remove Carbon from Indium UV Filters

    NASA Technical Reports Server (NTRS)

    Greer, Harold F.; Nikzad, Shouleh; Beasley, Matthew; Gantner, Brennan

    2012-01-01

    The sounding rocket experiment FIRE (Far-ultraviolet Imaging Rocket Experiment) will improve the science community's ability to image a spectral region hitherto unexplored astronomically. The imaging band of FIRE (900 to 1,100 Angstroms) will help fill the current wavelength imaging observation hole existing from approximately 620 Angstroms to the GALEX band near 1,350 Angstroms. FIRE is a single-optic prime focus telescope with a 1.75-m focal length. The bandpass of 900 to 1,100 Angstroms is set by a combination of the mirror coating, the indium filter in front of the detector, and the salt coating on the front of the detector's microchannel plates. Critical to this is the indium filter, which must reduce the flux from Lyman-alpha at 1,216 Angstroms by a minimum factor of 10(exp -4). The cost of this Lyman-alpha removal is that the filter is not fully transparent at the desired wavelengths of 900 to 1,100 Angstroms. Recently, in a project to improve the performance of optical and solar-blind detectors, JPL developed a plasma process capable of removing carbon contamination from indium metal. In this work, a low-power, low-temperature hydrogen plasma reacts with the carbon contaminants in the indium to form methane, but leaves the indium metal surface undisturbed. This process was recently tested in a proof-of-concept experiment with a filter provided by the University of Colorado. This initial test showed an improvement in transmission from 7 to 9 percent near 900 Angstroms with no process optimization applied. Further improvements were readily achieved, bringing the total transmission to 12% with optimization of JPL's existing process.

  8. Passengers on Voyages of Exploration: The Beautiful and Surprising Work Amateurs Can do with Raw Image Data from Planetary Missions

    NASA Astrophysics Data System (ADS)

    Lakdawalla, E. S.

    2008-11-01

    Many recent planetary science missions, including the Mars Exploration Rovers, Cassini-Huygens, and New Horizons, have instituted a policy of the rapid release of "raw" images to the Internet within days or even hours of their acquisition. The availability of these data, along with the increasing power of home computers and the availability of high-bandwidth Internet connections, has stimulated the development of a worldwide community of armchair planetary scientists, who are able to participate in the everyday drama of exploratory missions' encounters with new worlds and new landscapes. Far from passive onlookers, many of these enthusiasts have taught themselves image processing techniques and have even written software to perform automated processing and mosaicking of these raw data sets. They rapidly produce stunning visualizations and then post them to their own blogs or online forums, where they also engage in discussing scientific observations and inferences about the data sets, broadening missions' public outreach efforts beyond their direct reach. These amateur space scientists feel a deep sense of involvement in and connection to space missions, which makes them enthusiastic (and occasionally demanding) supporters of space exploration.
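
    A typical amateur product of this kind is a color composite assembled from a mission's separate filter frames. The numpy/PIL sketch below is a minimal version under stated assumptions: the filenames are hypothetical, and real workflows also align the frames and balance exposure before stacking.

```python
import numpy as np
from PIL import Image

def rgb_composite(red_path, green_path, blue_path, out_path):
    """Stack three co-registered single-filter frames into a color image,
    stretching each channel to its own 1st-99th percentile range."""
    chans = []
    for p in (red_path, green_path, blue_path):
        a = np.asarray(Image.open(p).convert("F"))
        lo, hi = np.percentile(a, (1, 99))
        chans.append(np.clip((a - lo) / max(hi - lo, 1e-6), 0.0, 1.0))
    rgb = (np.dstack(chans) * 255).astype(np.uint8)
    Image.fromarray(rgb).save(out_path)

# Hypothetical usage with raw filter frames saved from a mission website:
# rgb_composite("N00123_RED.jpg", "N00123_GRN.jpg", "N00123_BL1.jpg",
#               "color_composite.png")
```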

  9. A simple solution for model comparison in bold imaging: the special case of reward prediction error and reward outcomes.

    PubMed

    Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D

    2013-01-01

    Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional magnetic resonance imaging (fMRI), researchers are able to visualize changes in the internal variables of a time-varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging, including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper is correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
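
    A small simulation makes the problem concrete: in a Rescorla-Wagner learner, the predicted value, the prediction error, and the outcome satisfy RPE = outcome - value, so regressors built from them and modeled close in time are strongly correlated. This is a sketch of the collinearity issue, not the paper's own analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, alpha = 200, 0.2
rewards = rng.binomial(1, 0.7, n_trials).astype(float)

v = 0.5
values, rpes = [], []
for r in rewards:
    values.append(v)        # predicted value (cue-locked regressor)
    rpe = r - v             # reward prediction error (outcome-locked)
    rpes.append(rpe)
    v += alpha * rpe        # Rescorla-Wagner update

values, rpes = np.asarray(values), np.asarray(rpes)
# RPE = outcome - value: the three regressors are linearly dependent, so
# entering any two of them close in time inflates estimator variance.
print(np.corrcoef([values, rpes, rewards]))
```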

  10. Will Belly Dancing Be Our Nemesis?

    ERIC Educational Resources Information Center

    Parnell, Dale

    1991-01-01

    Perceives the community college's image as distorted by the provision of hobby and recreation courses. Advocates linkages with other community organizations offering adult and community service programs. Calls for college involvement in community development and in the solution of urban and suburban problems. (DMM)

  11. Quality Assurance Results for a Commercial Radiosurgery System: A Communication.

    PubMed

    Ruschin, Mark; Lightstone, Alexander; Beachey, David; Wronski, Matt; Babic, Steven; Yeboah, Collins; Lee, Young; Soliman, Hany; Sahgal, Arjun

    2015-10-01

    The purpose of this communication is to inform the radiosurgery community of quality assurance (QA) results requiring attention in a commercial FDA-approved linac-based cone stereotactic radiosurgery (SRS) system. Standard published QA guidelines from the American Association of Physicists in Medicine (AAPM) were followed during the SRS system's commissioning process, including end-to-end testing, cone concentricity testing, image transfer verification, and documentation. Several software and hardware deficiencies that were deemed risky were uncovered during the process, and QA processes were put in place to mitigate these risks during clinical practice. In particular, the present work focuses on daily cone concentricity testing and commissioning-related findings associated with the software. Cone concentricity/alignment is measured daily using both optical light-field inspection and quantitative radiation field tests with the electronic portal imager. In 10 out of 36 clinical treatments, adjustments to the cone position had to be made to align the cone with the collimator axis to less than 0.5 mm, and on two occasions the pre-adjustment measured offset was 1.0 mm. Software-related errors discovered during commissioning included incorrect transfer of the isocentre in DICOM coordinates, improper handling of non-axial image sets, and complex handling of beam data, especially for multi-target treatments. QA processes were established to mitigate the occurrence of the software errors. With proper QA processes, the reported SRS system complies with tolerances set out in established guidelines. Discussions with the vendor are ongoing to address some of the hardware issues related to cone alignment. © The Author(s) 2014.
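
    The daily quantitative test reduces to measuring the offset between the radiation-field centroid in a portal image and the collimator-axis reference point. A hedged numpy sketch follows; the segmentation threshold, pixel pitch, and reference point are assumptions for illustration, not the system's actual procedure.

```python
import numpy as np

def field_offset_mm(epid_image, ref_px, mm_per_px=0.4, thresh=0.5):
    """Offset (mm) between the cone's radiation-field centroid and the
    collimator-axis reference point `ref_px` (row, col) in an EPID image."""
    img = epid_image.astype(float)
    mask = img > thresh * img.max()           # segment the radiation field
    rows, cols = np.nonzero(mask)
    w = img[rows, cols]                       # intensity-weighted centroid
    cy, cx = np.average(rows, weights=w), np.average(cols, weights=w)
    dy = (cy - ref_px[0]) * mm_per_px
    dx = (cx - ref_px[1]) * mm_per_px
    return np.hypot(dx, dy)  # e.g., flag for adjustment if above 0.5 mm
```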

  12. Behavioral observations of positive and negative valence systems in early childhood predict physiological measures of emotional processing three years later.

    PubMed

    Kessel, Ellen M; Kujawa, Autumn; Goldstein, Brandon; Hajcak, Greg; Bufferd, Sara J; Dyson, Margaret; Klein, Daniel N

    2017-07-01

    The Research Domain Criteria (RDoC) constructs of Positive Valence Systems (PVS) and Negative Valence Systems (NVS) are presumed to manifest behaviorally through early-emerging temperamental negative affectivity (NA) and positive affectivity (PA). The late positive potential (LPP) is a physiological measure of attention towards both negative and positive emotional stimuli; however, its associations with behavioral aspects of PVS and NVS have yet to be examined. In a community sample of children (N = 340), we examined longitudinal relationships between observational measures of temperamental PA and NA assessed at age 6, and the LPP to both pleasant and unpleasant images assessed at age 9. Lower PA at age 6 predicted reduced LPP amplitudes to pleasant, but not unpleasant, images. NA as a composite measure was not related to the LPP, but specific associations were observed with facets of NA: greater fear predicted an enhanced LPP to unpleasant images, whereas greater sadness predicted a reduced LPP to unpleasant images. We were unable to evaluate concurrent associations between behavioral observations of temperament and the LPP, and effect sizes were modest. Results support correspondence between behavioral and physiological measures of emotional processing across development, and provide evidence of discriminant validity in that PA was specifically related to the LPP to pleasant images, while facets of NA were specifically linked to the LPP to unpleasant images. Distinct associations of temperamental sadness and fear with the LPP highlight the importance of further evaluating subconstructs of NVS. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Optimizing technology development and adoption in medical imaging using the principles of innovation diffusion, part II: practical applications.

    PubMed

    Reiner, Bruce I

    2012-02-01

    Successful adoption of new technology development can be accentuated by learning and applying the scientific principles of innovation diffusion. This is of particular importance to areas within medical imaging practice that have lagged in innovation, perhaps the most notable of which is reporting, which has remained relatively stagnant for over a century. While the theoretical advantages of structured reporting have been well documented throughout the medical imaging community, adoption to date has been tepid and largely relegated to the academic and breast imaging communities. Widespread adoption will likely require an alternative approach to innovation, one which addresses the heterogeneity and diversity of the practicing radiologist community along with the ever-changing expectations in service delivery. The challenges and strategies for reporting innovation and adoption are discussed, with the goal of adapting and customizing new technology to the preferences and needs of individual end-users.

  14. WFIRST: Update on the Coronagraph Science Requirements

    NASA Astrophysics Data System (ADS)

    Douglas, Ewan S.; Cahoy, Kerri; Carlton, Ashley; Macintosh, Bruce; Turnbull, Margaret; Kasdin, Jeremy; WFIRST Coronagraph Science Investigation Teams

    2018-01-01

    The WFIRST Coronagraph instrument (CGI) will enable direct imaging and low resolution spectroscopy of exoplanets in reflected light and imaging polarimetry of circumstellar disks. The CGI science investigation teams were tasked with developing a set of science requirements which advance our knowledge of exoplanet occurrence and atmospheric composition, as well as the composition and morphology of exozodiacal debris disks, cold Kuiper Belt analogs, and protoplanetary systems. We present the initial content, rationales, validation, and verification plans for the WFIRST CGI, informed by detailed and still-evolving instrument and observatory performance models. We also discuss our approach to the requirements development and management process, including the collection and organization of science inputs, open source approach to managing the requirements database, and the range of models used for requirements validation. These tools can be applied to requirements development processes for other astrophysical space missions, and may ease their management and maintenance. These WFIRST CGI science requirements allow the community to learn about and provide insights and feedback on the expected instrument performance and science return.

  15. The California Current System

    NASA Image and Video Library

    2017-12-08

    This February 8, 2016 composite image reveals the complex distribution of phytoplankton in one of Earth's eastern boundary upwelling systems — the California Current. Recent work suggests that our warming climate may be increasing the intensity of upwelling in such regions, with possible repercussions for the species that comprise those ecosystems. NASA's OceanColor Web is supported by the Ocean Biology Processing Group (OBPG) at NASA's Goddard Space Flight Center. Our responsibilities include the collection, processing, calibration, validation, archiving and distribution of ocean-related products from a large number of operational, satellite-based remote-sensing missions, providing ocean color, sea surface temperature and sea surface salinity data to the international research community since 1996. Credit: NASA/Goddard/Suomi-NPP/VIIRS

  16. Pre-Launch Evaluation of the NPP VIIRS Land and Cryosphere EDRs to Meet NASA's Science Requirements

    NASA Technical Reports Server (NTRS)

    Roman, Miguel O.; Justice, Chris; Csiszar, Ivan; Key, Jeffrey R.; Devadiga, Sadashiva; Davidson, Carol; Wolfe, Robert; Privette, Jeff

    2011-01-01

    This paper summarizes the NASA Visible Infrared Imaging Radiometer Suite (VIIRS) Land Science team's findings to date with respect to the utility of the VIIRS Land and Cryosphere EDRs to meet NASA's science requirements. Based on previous assessments and results from a recent 51-day global test performed by the Land Product Evaluation and Analysis Tool Element (Land PEATE), the NASA VIIRS Land Science team has determined that, if all the Land and Cryosphere EDRs are to serve the needs of the science community, a number of changes to several products and the Interface Data Processing Segment (IDPS) algorithm processing chain will be needed. In addition, other products will also need to be added to the VIIRS Land product suite to provide continuity for all of the MODIS land data record. As the NASA research program explores new global change research areas, the VIIRS instrument should also provide the polar-orbiting imager data from which new algorithms could be developed, produced, and validated.

  17. Engaging stakeholder communities as body image intervention partners: The Body Project as a case example.

    PubMed

    Becker, Carolyn Black; Perez, Marisol; Kilpela, Lisa Smith; Diedrichs, Phillippa C; Trujillo, Eva; Stice, Eric

    2017-04-01

    Despite recent advances in developing evidence-based psychological interventions, substantial changes are needed in the current system of intervention delivery to impact mental health on a global scale (Kazdin & Blase, 2011). Prevention offers one avenue for reaching large populations because prevention interventions often are amenable to scaling-up strategies, such as task-shifting to lay providers, which further facilitate community stakeholder partnerships. This paper discusses the dissemination and implementation of the Body Project, an evidence-based body image prevention program, across 6 diverse stakeholder partnerships that span academic, non-profit and business sectors at national and international levels. The paper details key elements of the Body Project that facilitated partnership development, dissemination and implementation, including use of community-based participatory research methods and a blended train-the-trainer and task-shifting approach. We observed consistent themes across partnerships, including: sharing decision making with community partners, engaging of community leaders as gatekeepers, emphasizing strengths of community partners, working within the community's structure, optimizing non-traditional and/or private financial resources, placing value on cost-effectiveness and sustainability, marketing the program, and supporting flexibility and creativity in developing strategies for evolution within the community and in research. Ideally, lessons learned with the Body Project can be generalized to implementation of other body image and eating disorder prevention programs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
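
    Both quality metrics used here are straightforward to reproduce; with scikit-image, PSNR and (M)SSIM between an original and a decompressed frame are one-liners. The JPEG2000 encode/decode step itself is omitted below and would be done with a codec such as OpenJPEG; the toy data stand in for a real AIA frame.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(original, decompressed):
    """Return (PSNR in dB, mean SSIM) for two images of identical shape."""
    drange = float(original.max() - original.min())
    psnr = peak_signal_noise_ratio(original, decompressed, data_range=drange)
    mssim = structural_similarity(original, decompressed, data_range=drange)
    return psnr, mssim

# Toy check: both metrics degrade as synthetic "compression error" grows.
rng = np.random.default_rng(2)
img = rng.normal(100.0, 20.0, (256, 256))
for sigma in (1.0, 5.0, 20.0):
    print(sigma, compare(img, img + rng.normal(0.0, sigma, img.shape)))
```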

  19. Mass Spectrometry Imaging of Complex Microbial Communities.

    PubMed

    Dunham, Sage J B; Ellis, Joseph F; Li, Bin; Sweedler, Jonathan V

    2017-01-17

    In the two decades since mass spectrometry imaging (MSI) was first applied to visualize the distribution of peptides across biological tissues and cells, the technique has become increasingly effective and reliable. MSI excels at providing complementary information to existing methods for molecular analysis, such as genomics, transcriptomics, and metabolomics, and stands apart from other chemical imaging modalities through its capability to generate information that is simultaneously multiplexed and chemically specific. Today a diverse family of MSI approaches is applied throughout the scientific community to study the distribution of proteins, peptides, and small-molecule metabolites across many biological models. The inherent strengths of MSI make the technique valuable for studying microbial systems. Many microbes reside in surface-attached multicellular and multispecies communities, such as biofilms and motile colonies, where they work together to harness surrounding nutrients, fend off hostile organisms, and shield one another from adverse environmental conditions. These processes, as well as many others essential for microbial survival, are mediated through the production and utilization of a diverse assortment of chemicals. Although bacterial cells are generally only a few microns in diameter, the ecologies they influence can encompass entire ecosystems, and the chemical changes that they bring about can occur over time scales ranging from milliseconds to decades. Because of their incredible complexity, our understanding of and influence over microbial systems requires detailed scientific evaluations that yield both chemical and spatial information. MSI is well-positioned to fulfill these requirements. With small adaptations to existing methods, the technique can be applied to study a wide variety of chemical interactions, including those that occur inside single-species microbial communities, between cohabitating microbes, and between microbes and their hosts. In recognition of this potential for scientific advancement, researchers have adapted MSI methodologies for the specific needs of the microbiology research community. As a result, workflows exist for imaging microbial systems with many of the common MSI ionization methods. Despite this progress, there is substantial room for improvement in instrumentation, sample preparation, and data interpretation. This Account provides a brief overview of the state of technology in microbial MSI, illuminates selected applications that demonstrate the potential of the technique, and highlights a series of development challenges that must be met to move the field forward. In the coming years, as microbial MSI becomes easier to use and more universally applicable, the technique will evolve into a fundamental tool widely applied throughout many divisions of science, medicine, and industry.

  1. The Structure and Distribution of Benthic Communities on a Shallow Seamount (Cobb Seamount, Northeast Pacific Ocean)

    PubMed Central

    Curtis, Janelle M. R.; Clarke, M. Elizabeth

    2016-01-01

    Partially owing to seamounts' isolation and remote distribution, research on them is still in its infancy, with few comprehensive datasets and little empirical evidence supporting or refuting prevailing ecological paradigms. As anthropogenic activity in the high seas increases, so does the need for a better understanding of seamount ecosystems and the factors that influence the distribution of sensitive benthic communities. This study used quantitative community analyses to detail the structure, diversity, and distribution of benthic mega-epifauna communities on Cobb Seamount, a shallow seamount in the Northeast Pacific Ocean. Underwater vehicles were used to visually survey the benthos and seafloor in ~1600 images (each covering ~5 m2) between 34 and 1154 m depth. The analyses of 74 taxa from 11 phyla resulted in the identification of nine communities. Each community was typified by taxa considered to provide biological structure and/or to be primary producers. The majority of the community-defining taxa were cold-water corals, sponges, or algae. Communities were generally distributed as bands encircling the seamount, and depth was consistently shown to be the strongest environmental proxy of the community-structuring processes. The remaining variability in community structure was partially explained by substrate type, rugosity, and slope. The study used environmental metrics, derived from ship-based multibeam bathymetry, to model the distribution of communities on the seamount. This model was successfully applied to map the distribution of communities over a 220 km2 region of Cobb Seamount. The results of the study support the paradigms that seamounts are diversity 'hotspots', that the majority of seamount communities are at risk of disturbance from bottom fishing, and that seamounts are refugia for biota, while refuting the idea that seamounts have high endemism. PMID:27792782

  2. VisPort: Web-Based Access to Community-Specific Visualization Functionality [Shedding New Light on Exploding Stars: Visualization for TeraScale Simulation of Neutrino-Driven Supernovae (Final Technical Report)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, M Pauline

    2007-06-30

    The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products, typically images or movies, are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML, and introduces the notion of visualization informatics - the formalization and specialization of information related to the process and products of visualization.

  3. Understanding community-based processes for research ethics review: a national study.

    PubMed

    Shore, Nancy; Brazauskas, Ruta; Drew, Elaine; Wong, Kristine A; Moy, Lisa; Baden, Andrea Corage; Cyr, Kirsten; Ulevicus, Jocelyn; Seifer, Sarena D

    2011-12-01

    Institutional review boards (IRBs), designed to protect individual study participants, do not routinely assess community consent, risks, and benefits. Community groups are establishing ethics review processes to determine whether and how research is conducted in their communities. To strengthen the ethics review of community-engaged research, we sought to identify and describe these processes. In 2008 we conducted an online survey of US-based community groups and community-institutional partnerships involved in human-participants research. We identified 109 respondents who met participation criteria and had ethics review processes in place. The respondents' processes mainly functioned through community-institutional partnerships, community-based organizations, community health centers, and tribal organizations. These processes had been created primarily to ensure that the involved communities were engaged in and directly benefited from research and were protected from research harms. The primary process benefits included giving communities a voice in determining which studies were conducted and ensuring that studies were relevant and feasible, and that they built community capacity. The primary process challenges were the time and resources needed to support the process. Community-based processes for ethics review consider community-level ethical issues that institution-based IRBs often do not.

  4. Assessing the Agreement Between Eo-Based Semi-Automated Landslide Maps with Fuzzy Manual Landslide Delineation

    NASA Astrophysics Data System (ADS)

    Albrecht, F.; Hölbling, D.; Friedl, B.

    2017-09-01

    Landslide mapping benefits from the ever-increasing availability of Earth Observation (EO) data resulting from programmes like the Copernicus Sentinel missions, and from improved infrastructure for data access. However, there is a need for improved automated landslide information extraction from EO data, since the dominant method is still manual delineation. Object-based image analysis (OBIA) provides the means for fast and efficient extraction of landslide information. To assess its quality, automated results are often compared to manually delineated landslide maps. Although there is awareness of the uncertainties inherent in manual delineations, there is a lack of understanding of how they affect the levels of agreement in a direct comparison of OBIA-derived and manually derived landslide maps. In order to provide an improved reference, we present a fuzzy approach for the manual delineation of landslides on optical satellite images, thereby making the inherent uncertainties of the delineation explicit. The fuzzy manual delineation and the OBIA classification are compared with accuracy metrics accepted in the remote sensing community. We tested this approach on high resolution (HR) satellite images of three large landslides in Austria and Italy. We were able to show that the deviation of the OBIA result from the manual delineation can mainly be attributed to the uncertainty inherent in the manual delineation process, a relevant issue for the design of validation processes for OBIA-derived landslide maps.
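
    The comparison described above reduces to standard accuracy metrics computed against a hardened version of the fuzzy reference. Below is a minimal sketch of that bookkeeping, with synthetic arrays standing in for the OBIA mask and the fuzzy manual reference; the arrays, threshold, and function name are hypothetical, not taken from the paper.

      # Sketch: overall accuracy and Cohen's kappa for a crisp OBIA landslide mask
      # against a fuzzy manual reference (per-pixel fraction of analysts who
      # outlined the pixel as landslide). All values here are synthetic.
      import numpy as np

      def agreement_metrics(obia_mask, fuzzy_ref, threshold=0.5):
          ref = fuzzy_ref >= threshold                    # harden the fuzzy reference
          tp = np.logical_and(obia_mask, ref).sum()
          fp = np.logical_and(obia_mask, ~ref).sum()
          fn = np.logical_and(~obia_mask, ref).sum()
          tn = np.logical_and(~obia_mask, ~ref).sum()
          n = tp + fp + fn + tn
          po = (tp + tn) / n                              # observed agreement
          pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance
          return po, (po - pe) / (1 - pe)                 # overall accuracy, kappa

      rng = np.random.default_rng(0)
      fuzzy_ref = rng.random((100, 100))
      obia_mask = fuzzy_ref + rng.normal(0, 0.2, (100, 100)) >= 0.5
      oa, kappa = agreement_metrics(obia_mask, fuzzy_ref)
      print(f"overall accuracy={oa:.2f}, kappa={kappa:.2f}")

    Varying the hardening threshold shows how the uncertainty encoded in the fuzzy reference propagates into the reported agreement.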

  5. SELF IMAGES AND COMMUNITY IMAGES OF THE ELEMENTARY SCHOOL PRINCIPAL--FINDINGS AND IMPLICATIONS OF A SOCIOLOGICAL INQUIRY.

    ERIC Educational Resources Information Center

    FOSKETT, JOHN M.; WOLCOTT, HARRY F.

    The system of rules that guides the behavior of elementary school principals was investigated. This body of rules, termed "the normative structure of the community as it pertains to school administrators," was studied by means of an instrument called the "Role Norm Inventory." Separate inventories were developed for elementary…

  6. HiRISE: The People's Camera

    NASA Astrophysics Data System (ADS)

    McEwen, A. S.; Eliason, E.; Gulick, V. C.; Spinoza, Y.; Beyer, R. A.; HiRISE Team

    2010-12-01

    The High Resolution Imaging Science Experiment (HiRISE) camera, orbiting Mars since 2006 on the Mars Reconnaissance Orbiter (MRO), has returned more than 17,000 large images with scales as small as 25 cm/pixel. From its beginning, the HiRISE team has followed “The People’s Camera” concept, with rapid release of useful images, explanations, and tools, and by facilitating public image suggestions. The camera includes 14 CCDs, each read out into 2 data channels, so compressed images are returned from MRO as 28 long (up to 120,000 line) images that are 1024 pixels wide (or binned 2x2 to 512 pixels, etc.). These raw data are very difficult to use, especially for the public. At the HiRISE operations center the raw data are calibrated and processed into a series of B&W and color products, including browse images and JPEG2000-compressed images, with tools to make it easy for everyone to explore these enormous images (see http://hirise.lpl.arizona.edu/). Automated pipelines do all of this processing, so we can keep up with the high data rate; images go directly to the format of the Planetary Data System (PDS). After students visually check each image product for errors, the products are fully released just 1 month after receipt; captioned images (written by science team members) may be released sooner. These processed HiRISE images have been incorporated into tools such as Google Mars and World Wide Telescope for even greater accessibility. Fifty-one Digital Terrain Models derived from HiRISE stereo pairs have been released, resulting in some spectacular flyover movies produced by members of the public and viewed up to 50,000 times on YouTube. Public targeting began in 2007 via NASA Quest (http://marsoweb.nas.nasa.gov/HiRISE/quest/), and more than 200 images have been acquired, mostly by students and educators. At the beginning of 2010 we released HiWish (http://www.uahirise.org/hiwish/), opening HiRISE targeting to anyone in the world with Internet access, and already more than 100 public suggestions have been acquired. HiRISE has proven very popular with the public and the science community. For example, a Google search on “HiRISE Mars” returns 626,000 results. We have participated in well over two dozen presentations, specifically talking to middle and high schoolers about HiRISE. Our images and captions have been featured in high-quality print magazines such as National Geographic, Ciel et Espace, and Sky and Telescope.

  7. A Freeware Path to Neutron Computed Tomography

    NASA Astrophysics Data System (ADS)

    Schillinger, Burkhard; Craft, Aaron E.

    Neutron computed tomography has become a routine method at many neutron sources due to the availability of digital detection systems, powerful computers and advanced software. The commercial packages Octopus by Inside Matters and VGStudio by Volume Graphics have been established as a quasi-standard for high-end computed tomography. However, these packages require a substantial investment and are available to users only on-site at the imaging facility for their data processing. There is demand from users for image processing software they can run at home to do further data processing; in addition, neutron computed tomography is now being introduced even at smaller and older reactors, whose operators need to show a first working tomography setup before they can obtain a budget to build an advanced tomography system. Several packages are available on the web for free; however, these were developed for X-rays or synchrotron radiation and are not immediately usable for neutron computed tomography. Three reconstruction packages and three 3D-viewers have been identified and used successfully, even for gigabyte-sized datasets. This paper is not a scientific publication in the classic sense, but is intended as a review providing searchable help to make the described packages usable for the tomography community. It presents the necessary additional preprocessing in ImageJ, some workarounds for bugs in the software, and undocumented or badly documented parameters that need to be adapted for neutron computed tomography. The result is a slightly complicated but surprisingly high-quality path to neutron computed tomography images in 3D, though not a replacement for the even more powerful commercial software mentioned above.
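
    The additional preprocessing mentioned above typically includes dark-field and open-beam (flat-field) normalization of each projection before reconstruction. The following is a rough numpy sketch of that step under simplified assumptions (synthetic arrays, a single projection); it is not the workflow of any specific package named above.

      # Flat-field normalization of one neutron projection (all data synthetic).
      import numpy as np

      rng = np.random.default_rng(0)
      dark = rng.normal(100, 2, (512, 512))               # camera dark-field offset
      open_beam = rng.normal(4000, 50, (512, 512))        # beam without the sample
      proj = dark + (open_beam - dark) * np.exp(-0.8) + rng.normal(0, 20, (512, 512))

      # Normalize to transmission, guarding against dead pixels, then take the
      # negative log to get the attenuation line integrals used in reconstruction.
      transmission = (proj - dark) / np.clip(open_beam - dark, 1e-6, None)
      attenuation = -np.log(np.clip(transmission, 1e-6, None))
      print(f"mean attenuation ~ {attenuation.mean():.2f} (simulated value 0.8)")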

  8. Data publication with the structural biology data grid supports live analysis

    DOE PAGES

    Meyer, Peter A.; Socias, Stephanie; Key, Jason; ...

    2016-03-07

    Access to experimental X-ray diffraction image data is fundamental for validation and reproduction of macromolecular models and indispensable for development of structural biology processing methods. Here, we established a diffraction data publication and dissemination system, Structural Biology Data Grid (SBDG; data.sbgrid.org), to preserve primary experimental data sets that support scientific publications. Data sets are accessible to researchers through a community driven data grid, which facilitates global data access. Our analysis of a pilot collection of crystallographic data sets demonstrates that the information archived by SBDG is sufficient to reprocess data to statistics that meet or exceed the quality of the original published structures. SBDG has extended its services to the entire community and is used to develop support for other types of biomedical data sets. In conclusion, it is anticipated that access to the experimental data sets will enhance the paradigm shift in the community towards a much more dynamic body of continuously improving data analysis.

  9. Data publication with the structural biology data grid supports live analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Peter A.; Socias, Stephanie; Key, Jason

    Access to experimental X-ray diffraction image data is fundamental for validation and reproduction of macromolecular models and indispensable for development of structural biology processing methods. Here, we established a diffraction data publication and dissemination system, Structural Biology Data Grid (SBDG; data.sbgrid.org), to preserve primary experimental data sets that support scientific publications. Data sets are accessible to researchers through a community driven data grid, which facilitates global data access. Our analysis of a pilot collection of crystallographic data sets demonstrates that the information archived by SBDG is sufficient to reprocess data to statistics that meet or exceed the quality of the original published structures. SBDG has extended its services to the entire community and is used to develop support for other types of biomedical data sets. In conclusion, it is anticipated that access to the experimental data sets will enhance the paradigm shift in the community towards a much more dynamic body of continuously improving data analysis.

  10. Detection and identification of benthic communities and shoreline features in Biscayne Bay

    NASA Technical Reports Server (NTRS)

    Kolipinski, M. C.; Higer, A. L.

    1970-01-01

    Progress made in the development of a technique for identifying and delineating benthic and shoreline communities using multispectral imagery is described. Images were collected with a multispectral scanner system mounted in a C-47 aircraft. Concurrent with the overflight, ecological ground- and sea-truth information was collected at 19 sites in the bay and on the shore. Preliminary processing of the scanner imagery with a CDC 1604 digital computer provided the optimum channels for discernment among different underwater and coastal objects. When coupled together, automatic mapping of benthic plants from multiband imagery and mapping of isotherms and hydrodynamic parameters with a digital model can become an effective predictive ecological tool. Using the two systems, it appears possible to predict conditions that could adversely affect the benthic communities. With the advent of the ERTS satellites and space platforms, imagery data could be obtained which, when used in conjunction with water-level and meteorological data, would provide for continuous ecological monitoring.

  11. Data publication with the structural biology data grid supports live analysis.

    PubMed

    Meyer, Peter A; Socias, Stephanie; Key, Jason; Ransey, Elizabeth; Tjon, Emily C; Buschiazzo, Alejandro; Lei, Ming; Botka, Chris; Withrow, James; Neau, David; Rajashankar, Kanagalaghatta; Anderson, Karen S; Baxter, Richard H; Blacklow, Stephen C; Boggon, Titus J; Bonvin, Alexandre M J J; Borek, Dominika; Brett, Tom J; Caflisch, Amedeo; Chang, Chung-I; Chazin, Walter J; Corbett, Kevin D; Cosgrove, Michael S; Crosson, Sean; Dhe-Paganon, Sirano; Di Cera, Enrico; Drennan, Catherine L; Eck, Michael J; Eichman, Brandt F; Fan, Qing R; Ferré-D'Amaré, Adrian R; Fromme, J Christopher; Garcia, K Christopher; Gaudet, Rachelle; Gong, Peng; Harrison, Stephen C; Heldwein, Ekaterina E; Jia, Zongchao; Keenan, Robert J; Kruse, Andrew C; Kvansakul, Marc; McLellan, Jason S; Modis, Yorgo; Nam, Yunsun; Otwinowski, Zbyszek; Pai, Emil F; Pereira, Pedro José Barbosa; Petosa, Carlo; Raman, C S; Rapoport, Tom A; Roll-Mecak, Antonina; Rosen, Michael K; Rudenko, Gabby; Schlessinger, Joseph; Schwartz, Thomas U; Shamoo, Yousif; Sondermann, Holger; Tao, Yizhi J; Tolia, Niraj H; Tsodikov, Oleg V; Westover, Kenneth D; Wu, Hao; Foster, Ian; Fraser, James S; Maia, Filipe R N C; Gonen, Tamir; Kirchhausen, Tom; Diederichs, Kay; Crosas, Mercè; Sliz, Piotr

    2016-03-07

    Access to experimental X-ray diffraction image data is fundamental for validation and reproduction of macromolecular models and indispensable for development of structural biology processing methods. Here, we established a diffraction data publication and dissemination system, Structural Biology Data Grid (SBDG; data.sbgrid.org), to preserve primary experimental data sets that support scientific publications. Data sets are accessible to researchers through a community driven data grid, which facilitates global data access. Our analysis of a pilot collection of crystallographic data sets demonstrates that the information archived by SBDG is sufficient to reprocess data to statistics that meet or exceed the quality of the original published structures. SBDG has extended its services to the entire community and is used to develop support for other types of biomedical data sets. It is anticipated that access to the experimental data sets will enhance the paradigm shift in the community towards a much more dynamic body of continuously improving data analysis.

  12. Data publication with the structural biology data grid supports live analysis

    PubMed Central

    Meyer, Peter A.; Socias, Stephanie; Key, Jason; Ransey, Elizabeth; Tjon, Emily C.; Buschiazzo, Alejandro; Lei, Ming; Botka, Chris; Withrow, James; Neau, David; Rajashankar, Kanagalaghatta; Anderson, Karen S.; Baxter, Richard H.; Blacklow, Stephen C.; Boggon, Titus J.; Bonvin, Alexandre M. J. J.; Borek, Dominika; Brett, Tom J.; Caflisch, Amedeo; Chang, Chung-I; Chazin, Walter J.; Corbett, Kevin D.; Cosgrove, Michael S.; Crosson, Sean; Dhe-Paganon, Sirano; Di Cera, Enrico; Drennan, Catherine L.; Eck, Michael J.; Eichman, Brandt F.; Fan, Qing R.; Ferré-D'Amaré, Adrian R.; Christopher Fromme, J.; Garcia, K. Christopher; Gaudet, Rachelle; Gong, Peng; Harrison, Stephen C.; Heldwein, Ekaterina E.; Jia, Zongchao; Keenan, Robert J.; Kruse, Andrew C.; Kvansakul, Marc; McLellan, Jason S.; Modis, Yorgo; Nam, Yunsun; Otwinowski, Zbyszek; Pai, Emil F.; Pereira, Pedro José Barbosa; Petosa, Carlo; Raman, C. S.; Rapoport, Tom A.; Roll-Mecak, Antonina; Rosen, Michael K.; Rudenko, Gabby; Schlessinger, Joseph; Schwartz, Thomas U.; Shamoo, Yousif; Sondermann, Holger; Tao, Yizhi J.; Tolia, Niraj H.; Tsodikov, Oleg V.; Westover, Kenneth D.; Wu, Hao; Foster, Ian; Fraser, James S.; Maia, Filipe R. N C.; Gonen, Tamir; Kirchhausen, Tom; Diederichs, Kay; Crosas, Mercè; Sliz, Piotr

    2016-01-01

    Access to experimental X-ray diffraction image data is fundamental for validation and reproduction of macromolecular models and indispensable for development of structural biology processing methods. Here, we established a diffraction data publication and dissemination system, Structural Biology Data Grid (SBDG; data.sbgrid.org), to preserve primary experimental data sets that support scientific publications. Data sets are accessible to researchers through a community driven data grid, which facilitates global data access. Our analysis of a pilot collection of crystallographic data sets demonstrates that the information archived by SBDG is sufficient to reprocess data to statistics that meet or exceed the quality of the original published structures. SBDG has extended its services to the entire community and is used to develop support for other types of biomedical data sets. It is anticipated that access to the experimental data sets will enhance the paradigm shift in the community towards a much more dynamic body of continuously improving data analysis. PMID:26947396

  13. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides its market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system that enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images to a homogeneous block. UltraMap v3 continues this innovation and offers a revolution in ortho processing. A fully automated dense-matching module produces high-precision digital surface models (DSMs), calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for getting the best out of UltraCam images, delivering DSM and ortho imagery.

  14. An open-source, FireWire camera-based, Labview-controlled image acquisition system for automated, dynamic pupillometry and blink detection.

    PubMed

    de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio

    2013-12-01

    The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on the practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable video recording modes: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in Labview. An offline analysis suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be made in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs.
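
    As a concrete illustration of the kind of pupil-estimation algorithms discussed above, here is a compact OpenCV sketch of dark-pupil detection on a single synthetic frame; the thresholds and the synthetic frame are hypothetical, and the published system's actual processing chain may differ.

      # Dark-pupil detection: threshold the darkest region, take the largest
      # contour, and fit an ellipse to estimate center and diameter.
      import cv2
      import numpy as np

      frame = np.full((240, 320), 180, np.uint8)          # bright synthetic eye image
      cv2.ellipse(frame, (160, 120), (30, 24), 15, 0, 360, 20, -1)  # dark "pupil"

      blur = cv2.GaussianBlur(frame, (7, 7), 0)           # suppress sensor noise
      _, binary = cv2.threshold(blur, 60, 255, cv2.THRESH_BINARY_INV)

      contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_NONE)
      pupil = max(contours, key=cv2.contourArea)          # assume largest dark blob
      (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(pupil) # needs >= 5 points
      print(f"pupil center ({cx:.0f},{cy:.0f}) px, axes ({ax1:.0f},{ax2:.0f}) px")

    At 120 Hz, this per-frame cost (a blur, a threshold, a contour scan) is easily affordable on a personal computer, consistent with the benchmark figures reported.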

  15. Cosmic Origins (COR) Technology Development Program Overview

    NASA Astrophysics Data System (ADS)

    Werneth, Russell; Pham, B.; Clampin, M.

    2014-01-01

    The Cosmic Origins (COR) Program Office was established in FY11 and resides at the NASA Goddard Space Flight Center (GSFC). The office serves as the implementation arm of the Astrophysics Division at NASA Headquarters for COR Program-related matters. We present an overview of the Program’s technology management activities and its technology development portfolio. We discuss the process for addressing community-provided technology needs and the Technology Management Board (TMB)-vetted prioritization and investment recommendations. This process improves the transparency and relevance of technology investments, provides the community a voice in the process, and leverages the technology investments of external organizations by defining a need and a customer. Goals for the COR Program envisioned by the National Research Council’s (NRC) “New Worlds, New Horizons in Astronomy and Astrophysics” (NWNH) Decadal Survey report include a 4m-class UV/optical telescope that would conduct imaging and spectroscopy as a post-Hubble observatory with significantly improved sensitivity and capability, a near-term investigation of NASA participation in the Japanese Aerospace Exploration Agency/Institute of Space and Astronautical Science (JAXA/ISAS) Space Infrared Telescope for Cosmology and Astrophysics (SPICA) mission, and future Explorers.

  16. Space Radar Image of Kilauea, Hawaii - interferometry 1

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This X-band image of the volcano Kilauea was taken on October 4, 1994, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar. The area shown is about 9 kilometers by 13 kilometers (5.5 miles by 8 miles) and is centered at about 19.58 degrees north latitude and 155.55 degrees west longitude. This image and a similar image taken during the first flight of the radar instrument on April 13, 1994, were combined to produce the topographic information by means of an interferometric process, in which radar data acquired on different passes of the space shuttle are overlaid to obtain elevation information. Three additional images are provided showing an overlay of radar data with interferometric fringes; a three-dimensional image based on altitude lines; and, finally, a topographic view of the region. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR. The Instituto Ricerca Elettromagnetismo Componenti Elettronici (IRECE) at the University of Naples was a partner in the interferometry analysis.
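
    The interferometric step described above amounts to differencing the phase of two co-registered complex radar images. Here is a minimal numpy sketch with synthetic single-look-complex (SLC) arrays, purely to show the operation, not the SIR-C/X-SAR processing chain:

      # Interferogram formation: multiply one complex image by the conjugate of
      # the other; the resulting phase encodes the path-length (elevation) change.
      import numpy as np

      rng = np.random.default_rng(1)
      slc_pass1 = rng.normal(size=(512, 512)) + 1j * rng.normal(size=(512, 512))
      slc_pass2 = slc_pass1 * np.exp(1j * 0.3)      # pretend a uniform phase shift

      interferogram = slc_pass1 * np.conj(slc_pass2)
      fringes = np.angle(interferogram)             # wrapped phase in [-pi, pi)
      print(f"recovered phase ~ {-fringes.mean():.2f} rad (simulated 0.3)")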

  17. The Advanced Rapid Imaging and Analysis (ARIA) Project: Status of SAR products for Earthquakes, Floods, Volcanoes and Groundwater-related Subsidence

    NASA Astrophysics Data System (ADS)

    Owen, S. E.; Yun, S. H.; Hua, H.; Agram, P. S.; Liu, Z.; Sacco, G. F.; Manipon, G.; Linick, J. P.; Fielding, E. J.; Lundgren, P.; Farr, T. G.; Webb, F.; Rosen, P. A.; Simons, M.

    2017-12-01

    The Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards is focused on rapidly generating high-level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. Space-based geodetic measurement techniques including Interferometric Synthetic Aperture Radar (InSAR), differential Global Positioning System, and SAR-based change detection have become critical additions to our toolset for understanding and mapping the damage and deformation caused by earthquakes, volcanic eruptions, floods, landslides, and groundwater extraction. Up until recently, processing of these data sets has been handcrafted for each study or event and has not generated products rapidly and reliably enough for response to natural disasters or for timely analysis of large data sets. The ARIA project, a joint venture co-sponsored by the California Institute of Technology and by NASA through the Jet Propulsion Laboratory, has been capturing the knowledge applied to these responses and building it into an automated infrastructure to generate imaging products in near real-time that can improve situational awareness for disaster response. In addition to supporting the growing science and hazard response communities, the ARIA project has developed the automated imaging and analysis capabilities necessary to keep up with the influx of raw SAR data from geodetic imaging missions such as ESA's Sentinel-1A/B, now operating with repeat intervals as short as 6 days, and the upcoming NASA NISAR mission. We will present the progress and results we have made on automating the analysis of Sentinel-1A/B SAR data for hazard monitoring and response, with emphasis on recent developments and end-user engagement in flood extent mapping and deformation time series for both volcano monitoring and mapping of groundwater-related subsidence.

  18. Monitoring Corals and Submerged Aquatic Vegetation in Western Pacific Using Satellite Remote Sensing Integrated with Field Data

    NASA Astrophysics Data System (ADS)

    Roelfsema, C. M.; Phinn, S. R.; Lyons, M. B.; Kovacs, E.; Saunders, M. I.; Leon, J. X.

    2013-12-01

    Corals and Submerged Aquatic Vegetation (SAV) are typically found in highly dynamic environments where the magnitude and types of physical and biological processes controlling their distribution, diversity and function change dramatically. Recent advances in the types of satellite image data available globally and the length of their archives, coupled with new techniques for extracting environmental information from these data sets, have enabled significant advances in our ability to map and monitor coral and SAV environments. Object Based Image Analysis (OBIA) techniques are one of the most significant advances in information extraction for processing images to deliver environmental information at multiple spatial scales. This poster demonstrates OBIA applied to high spatial resolution satellite image data to map and monitor coral and SAV communities across a variety of environments in the Western Pacific that vary in their extent, biological composition, forcing physical factors and location. High spatial resolution satellite imagery (Quickbird, Ikonos and Worldview-2) was acquired coincident with field surveys on each reef, which collected georeferenced benthic photo transects over various areas in the Western Pacific. Baseline maps were created for Roviana Lagoon, Solomon Islands (600 km2), Bikini Atoll, Marshall Islands (800 km2), and Lizard Island, Australia (30 km2), and time-series maps of geomorphic zones and benthic communities were produced for Heron Reef, Australia (24 km2) and the Eastern Banks area of Moreton Bay, Australia (200 km2). The satellite image data were corrected for radiometric and atmospheric distortions to at-surface reflectance. Georeferenced benthic photos were acquired by divers or Autonomous Underwater Vehicles, analysed for benthic cover composition, and used for calibration and validation purposes. Hierarchical mapping from reef/non-reef (1000's-10000's m), reef type (100's-1000's m), geomorphic zone (10's-100's m), and dominant benthic cover composition (1-10's m), down to individual benthic cover type (0.5-5.0 m), was completed using object-based segmentation and semi-automated labelling through membership rules. Accuracy assessment against the field data sets showed maximum accuracies of 90% for the larger-scale, less complex maps versus 40% for the smaller-scale, more complex maps. The study showed that current data sets and object-based analysis can reliably map at various scales and levels of complexity, covering a variety of extents and environments at various times; as a result, science and management can use these tools to assess and understand the ecological processes taking place in coral and SAV environments.
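
    The object-based workflow sketched above (segment first, then label objects with membership rules) can be illustrated in a few lines of Python with scikit-image; the segmentation parameters, reflectance thresholds, and class names below are hypothetical stand-ins for the hierarchical rule sets the authors describe.

      # Toy OBIA: segment an image into objects, then label each object by a
      # simple rule on its mean reflectance (synthetic data throughout).
      import numpy as np
      from skimage.segmentation import felzenszwalb

      rng = np.random.default_rng(2)
      image = rng.random((200, 200, 3))           # stand-in for corrected imagery
      segments = felzenszwalb(image, scale=100, sigma=0.5, min_size=50)

      labels = np.empty(segments.max() + 1, dtype="U10")
      for seg_id in np.unique(segments):
          mean_refl = image[segments == seg_id].mean()
          labels[seg_id] = ("coral" if mean_refl > 0.55
                            else "sav" if mean_refl > 0.45 else "sand")
      class_map = labels[segments]                # per-pixel class labels
      print({c: int((class_map == c).sum()) for c in np.unique(class_map)})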

  19. High-Resolution Photo-Mosaicing of the Rosebud Hydrothermal Vent Site and Surrounding Lava Flows, Galapagos Rift 86W: Techniques and Interpretations

    NASA Astrophysics Data System (ADS)

    Rzhanov, Y.; Mayer, L.; Fornari, D.; Shank, T.; Humphris, S.; Scheirer, D.; Kinsey, J.; Whitcomb, L.

    2003-12-01

    The Rosebud hydrothermal vent field was discovered in May 2002 in the Galapagos Rift near 86W during a series of Alvin dives and ABE autonomous vehicle surveys. Vertical-incidence digital imaging using a 3.1 Mpixel digital camera and strobe illumination from altitudes of 3-5m was carried out during the Alvin dives. A complete survey of the Rosebud vent site was carried out on Alvin Dive 3790. Submersible position was determined by post-cruise integration of 1.2 MHz bottom-lock Doppler sonar velocity data logged at 5Hz, integrated with heading and attitude data from a north-seeking fiber-optic gyroscope logged at 10Hz, and initialized with a surveyed-in long-baseline transponder navigation system providing geodetic position fixes at 15s intervals. The photo-mosaicing process consisted of three main stages: pre-processing, pair-wise image co-registration, and global alignment. Excellent image quality allowed us to avoid lens distortion correction, so images only underwent histogram equalization. Pair-wise co-registration of sequential frames was done partially automatically (where overlap exceeded 70 percent we employed a frequency-domain based technique), and partially manually (when overlap did not exceed 15 percent and manual feature extraction was the only way to find transformations relating the frames). Partial mosaics allowed us to determine which non-sequential frames had substantial overlap, and the corresponding transformations were found via feature extraction. Global alignment of the images consisted of construction of a sparse, nonlinear over-constrained system of equations reflecting positions of the frames in real-world coordinates. This system was solved using least squares, and the solution provided globally optimal positions of the frames in the overall mosaic. Over 700 images were mosaiced resulting in resolution of ~3 mm per pixel. The mosaiced area covers approximately 50 m x 60 m and clearly shows several biological zonations and distribution of lava flow morphologies, including what is interpreted as the contact between older lobate lava and the young sheet flow that hosts Rosebud vent communities. Recruitment of tubeworms, mussels, and clams is actively occurring at more than five locations oriented on a NE-SW trend where vent emissions occur through small cracks in the sheet flow. Large-scale views of seafloor hydrothermal vent sites, such as the one produced for Rosebud, are critical to properly understanding spatial relationships between hydrothermal biological communities, sites of focused and diffuse fluid flow, and the complex array of volcanic and tectonic features at mid-ocean ridge crests. These high-resolution perspectives are also critical to time-series studies where quantitative documentation of changes can be related to variations in hydrothermal, magmatic and tectonic processes.
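
    The two registration stages described above, frequency-domain pair-wise co-registration followed by global least-squares alignment, can be sketched compactly in Python. The example below uses scikit-image's phase correlation on synthetic overlapping frames and assumes pure translations; the real mosaic also incorporated manually derived transformations and navigation data.

      # Stage 1: frequency-domain offsets between sequential frames.
      # Stage 2: global least-squares solve for frame positions, with the first
      # frame pinned at the origin (toy 3-frame version of the over-constrained
      # system described above).
      import numpy as np
      from skimage.registration import phase_cross_correlation

      rng = np.random.default_rng(3)
      base = rng.random((600, 600))
      frames = [base[100 * i:100 * i + 256, 0:256] for i in range(3)]

      offsets = [phase_cross_correlation(frames[i], frames[i + 1])[0]
                 for i in range(2)]

      # Equations: p1 - p0 = offsets[0], p2 - p1 = offsets[1], p0 = 0
      A = np.array([[-1, 1, 0], [0, -1, 1], [1, 0, 0]], dtype=float)
      for axis in range(2):
          b = np.array([offsets[0][axis], offsets[1][axis], 0.0])
          p, *_ = np.linalg.lstsq(A, b, rcond=None)
          print(f"axis {axis}: frame positions {np.round(p, 1)}")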

  20. Will Belly Dancing Be Our Nemesis?

    ERIC Educational Resources Information Center

    Parnell, Dale

    1982-01-01

    Points out the degree to which the community college's image is distorted by the provision of hobby and recreation classes. Advocates linkages with other community organizations offering adult and community service programs. Calls for college involvement in community development and the solution of urban and suburban problems. (DMM)

  1. Cultural Diversity in Rural Communities.

    ERIC Educational Resources Information Center

    Castania, Kathy

    1992-01-01

    As rural communities become more culturally diverse, the institutions and organizations that serve them must assist this cultural transition by providing a framework for change. Such a framework includes a vision of healthy diverse communities that are conscious of changing demographics and willing to reevaluate community self-image. Three…

  2. High Contrast Imaging of Exoplanets and Exoplanetary Systems with JWST

    NASA Astrophysics Data System (ADS)

    Hinkley, Sasha; Skemer, Andrew; Biller, Beth; Baraffe, I.; Bonnefoy, M.; Bowler, B.; Carter, A.; Chen, C.; Choquet, E.; Currie, T.; Danielski, C.; Fortney, J.; Grady, C.; Greenbaum, A.; Hines, D.; Janson, M.; Kalas, P.; Kennedy, G.; Kraus, A.; Lagrange, A.; Liu, M.; Marley, M.; Marois, C.; Matthews, B.; Mawet, D.; Metchev, S.; Meyer, M.; Millar-Blanchaer, M.; Perrin, M.; Pueyo, L.; Quanz, S.; Rameau, J.; Rodigas, T.; Sallum, S.; Sargent, B.; Schlieder, J.; Schneider, G.; Stapelfeldt, K.; Tremblin, P.; Vigan, A.; Ygouf, M.

    2017-11-01

    JWST will transform our ability to characterize directly imaged planets and circumstellar debris disks, including the first spectroscopic characterization of directly imaged exoplanets at wavelengths beyond 5 microns, providing a powerful diagnostic of cloud particle properties, atmospheric structure, and composition. To lay the groundwork for these science goals, we propose a 39-hour ERS program to rapidly establish optimal strategies for JWST high contrast imaging. We will acquire: a) coronagraphic imaging of a newly discovered exoplanet companion, and a well-studied circumstellar debris disk with NIRCam & MIRI; b) spectroscopy of a wide separation planetary mass companion with NIRSPEC & MIRI; and c) deep aperture masking interferometry with NIRISS. Our primary goals are to: 1) generate representative datasets in modes to be commonly used by the exoplanet and disk imaging communities; 2) deliver science enabling products to empower a broad user base to develop successful future investigations; and 3) carry out breakthrough science by characterizing exoplanets for the first time over their full spectral range from 2-28 microns, and debris disk spectrophotometry out to 15 microns sampling the 3 micron water ice feature. Our team represents the majority of the community dedicated to exoplanet and disk imaging and has decades of experience with high contrast imaging algorithms and pipelines. We have developed a collaboration management plan and several organized working groups to ensure we can rapidly and effectively deliver high quality Science Enabling Products to the community.

  3. Red, Purple and Pink: The Colors of Diffusion on Pinterest

    PubMed Central

    Bakhshi, Saeideh; Gilbert, Eric

    2015-01-01

    Many lab studies have shown that colors can evoke powerful emotions and impact human behavior. Might these phenomena drive how we act online? A key research challenge for image-sharing communities is uncovering the mechanisms by which content spreads through the community. In this paper, we investigate whether there is a link between color and diffusion. Drawing on a corpus of one million images crawled from Pinterest, we find that color significantly impacts the diffusion of images and the adoption of content on image-sharing communities such as Pinterest, even after partially controlling for network structure and activity. Specifically, red, purple and pink seem to promote diffusion, while green, blue, black and yellow suppress it. To our knowledge, our study is the first to investigate how colors relate to online user behavior. In addition to contributing to the research conversation surrounding diffusion, these findings suggest future work using sophisticated computer vision techniques. We conclude with a discussion of the theoretical, practical and design implications suggested by this work, e.g., the design of engaging image filters. PMID:25658423

  4. Towards real-time diffuse optical tomography for imaging brain functions cooperated with Kalman estimator

    NASA Astrophysics Data System (ADS)

    Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng

    2018-02-01

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method for monitoring cerebral hemodynamics through optical changes measured at the scalp surface. It has come to play an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions unexplored before. The Kalman estimator has frequently been used in combination with modified Beer-Lambert law (MBLL) based optical topography (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application to more complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) with a Kalman estimator, greatly improving the spatial resolution. Instead of presenting only a spatially distributed image of absorption-coefficient changes at each time point during the recording process, the method maintains a single image that the Kalman estimator updates in real time; each voxel represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method in simulation experiments, demonstrating that it can obtain images with more reliable spatial resolution. Furthermore, a statistical analysis is conducted to help decide whether a voxel in the field of view is activated or not.
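
    A per-voxel version of the Kalman idea described above can be sketched in a few lines: the state is the HRF amplitude at one voxel, updated as each new measurement arrives. Everything below (state model, noise levels, toy regressor) is a hypothetical illustration, not the paper's actual filter.

      # Scalar Kalman filter: estimate the HRF amplitude x for one voxel from
      # noisy measurements z_t = h_t * x + noise, where h_t is a modeled HRF.
      import numpy as np

      def kalman_update(x, P, z, h, q=1e-5, r=4e-4):
          P = P + q                        # predict: random-walk state model
          K = P * h / (h * h * P + r)      # Kalman gain
          x = x + K * (z - h * x)          # correct with the innovation
          P = (1 - K * h) * P
          return x, P

      t = np.arange(0, 30, 0.5)
      hrf = t**5 * np.exp(-t) / 120.0      # toy hemodynamic response regressor
      z = 0.8 * hrf + np.random.default_rng(4).normal(0, 0.02, t.size)
      x, P = 0.0, 1.0
      for zi, hi in zip(z, hrf):
          x, P = kalman_update(x, P, zi, hi)
      print(f"estimated HRF amplitude ~ {x:.2f} (true 0.8)")

    Here r is set to the measurement-noise variance (0.02 squared), so the filter weights the data consistently with how they were simulated.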

  5. Development of a Medical Cyclotron Production Facility

    NASA Astrophysics Data System (ADS)

    Allen, Danny R.

    2003-08-01

    Development of a cyclotron manufacturing facility begins with a business plan. Geography, the size and activity of the medical community, the growth potential of the modality being served, and other business connections are all considered. This business used the customer base established by NuTech, Inc., an independent centralized nuclear pharmacy founded by Danny Allen. With two pharmacies in operation in Tyler and College Station and a customer base of 47 hospitals and clinics, the existing delivery system and pharmacist staff are used for the cyclotron facility. We then added cyclotron products to contracts with these customers to guarantee a supply. We partnered with a company in the process of developing PET imaging centers, and then built an independent imaging center attached to the cyclotron facility to allow the use of short-lived isotopes.

  6. Analyses of outcrop and sediment grains observed and collected from the Sirena Deep and Middle Pond of the Mariana Trench

    NASA Astrophysics Data System (ADS)

    Hand, K. P.; Bartlett, D. H.; Fryer, P.

    2012-12-01

    During a March 2012 expedition we recovered sediments from two locales within the Mariana Trench: Middle Pond and Sirena Deep. Samples were recovered from a Niskin bottle deployed on a passive lander platform that released an arm after touching down on the seafloor. The impact of the arm holding the Niskin bottle caused sediments to enter the bottle; this process was seen in images and on video captured by the lander. The combination of imagery and preliminary analyses of the sediments indicates that the Sirena Deep locale is a region of serpentinization and active microbial communities. Images show several outcrops consistent with serpentinization, some of which are coated with filamentous microbial mats. Results and analyses of these samples will be presented.

  7. InSAR Scientific Computing Environment

    NASA Technical Reports Server (NTRS)

    Rosen, Paul A.; Sacco, Gian Franco; Gurrola, Eric M.; Zebker, Howard A.

    2011-01-01

    This computing environment is the next generation of geodetic image processing technology for repeat-pass Interferometric Synthetic Aperture Radar (InSAR) sensors, identified by the community as a needed capability to provide flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products. This software allows users of interferometric radar data the flexibility to process from Level 0 to Level 4 products using a variety of algorithms and for a range of available sensors. There are many radar satellites in orbit today delivering to the science community data of unprecedented quantity and quality, making possible large-scale studies in climate research, natural hazards, and the Earth's ecosystem. The proposed DESDynI mission, now under consideration by NASA for launch later in this decade, would provide time series and multi-image measurements that permit 4D models of Earth surface processes so that, for example, climate-induced changes over time would become apparent and quantifiable. This advanced data processing technology, applied to a global data set such as from the proposed DESDynI mission, enables a new class of analyses at time and spatial scales unavailable using current approaches. This software implements an accurate, extensible, and modular processing system designed to realize the full potential of InSAR data from future missions such as the proposed DESDynI, existing radar satellite data, as well as data from the NASA UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) and other airborne platforms. The processing approach has been re-thought in order to enable multi-scene analysis by adding new algorithms and data interfaces, to permit user-reconfigurable operation and extensibility, and to capitalize on codes already developed by NASA and the science community. The framework incorporates modern programming methods based on recent research, including object-oriented scripts controlling legacy and new codes, abstraction and generalization of the data model for efficient manipulation of objects among modules, and well-designed module interfaces suitable for command-line execution or GUI programming. The framework is designed to allow user contributions that promote maximum utility and sophistication of the code, creating an open-source community that could extend the framework into the indefinite future.
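
    As a schematic of the modular, object-oriented design described above, the sketch below shows pipeline components sharing one small declared interface so they can be chained from a script (or a GUI) while legacy codes hide behind them. All class and field names are hypothetical; this illustrates the pattern, not the framework's actual API.

      # A common contract lets modules be reordered or swapped by the user; the
      # shared "product" dictionary stands in for the framework's data model.
      from abc import ABC, abstractmethod

      class Component(ABC):
          @abstractmethod
          def run(self, product: dict) -> dict: ...

      class FormInterferogram(Component):
          def run(self, product: dict) -> dict:
              # a wrapped legacy routine would be invoked here
              product["interferogram"] = (f"{product['reference']} x "
                                          f"{product['secondary']}")
              return product

      class Unwrap(Component):
          def run(self, product: dict) -> dict:
              product["unwrapped"] = f"unwrap({product['interferogram']})"
              return product

      pipeline = [FormInterferogram(), Unwrap()]      # user-reconfigurable order
      product = {"reference": "ref.slc", "secondary": "sec.slc"}
      for step in pipeline:
          product = step.run(product)
      print(product)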

  8. MCR Container Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P

    MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e. shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find and install the necessary libraries, either by chasing the runtime errors that result when they are missing or by inspecting the header information of the MCR's Executable and Linkable Format (ELF) libraries to determine which ones are missing from the system. To address these shortcomings, Docker images based on Community Enterprise Operating System (CentOS) 7, a derivative of Red Hat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where binaries created with the MATLAB Compiler can be executed using a sample Slurm Workload Manager script.
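
    The dependency audit described above can be partially automated. The sketch below walks an MCR installation on a Linux host and asks ldd which shared objects remain unresolved; the install path is hypothetical and the approach assumes the ldd utility is available.

      # List shared-library dependencies of the MCR that the system cannot
      # resolve ("not found" lines in ldd output).
      import pathlib
      import subprocess

      mcr_root = pathlib.Path("/usr/local/MATLAB/MATLAB_Runtime/v93")
      missing = set()
      for lib in mcr_root.rglob("*.so*"):
          out = subprocess.run(["ldd", str(lib)], capture_output=True, text=True)
          for line in out.stdout.splitlines():
              if "not found" in line:
                  missing.add(line.split()[0])   # library name left of '=>'
      print("\n".join(sorted(missing)) or "all dependencies resolved")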

  9. Gold nanoparticle contrast agents in advanced X-ray imaging technologies.

    PubMed

    Ahn, Sungsook; Jung, Sung Yong; Lee, Sang Joon

    2013-05-17

    Recently, there has been significant progress in the field of soft- and hard-X-ray imaging for a wide range of applications, both technically and scientifically, via developments in sources, optics and imaging methodologies. While one community is pursuing extensive applications of available X-ray tools, others are investigating improvements in techniques, including new optics, higher spatial resolutions and brighter compact sources. For increased image quality and more exquisite investigation of characteristic biological phenomena, contrast agents have been employed extensively in imaging technologies. Heavy metal nanoparticles are excellent absorbers of X-rays and can offer excellent improvements in medical diagnosis and X-ray imaging. In this context, the role of gold (Au) is important for advanced X-ray imaging applications. Au has a long history in a wide range of medical applications and exhibits characteristic interactions with X-rays. Therefore, Au can offer a particular advantage as a tracer and a contrast enhancer in X-ray imaging technologies by sensing the variation in X-ray attenuation in a given sample volume. This review summarizes the basics of X-ray imaging, from device setup to imaging technologies. It then covers recent studies in the development of X-ray imaging techniques utilizing gold nanoparticles (AuNPs) and their relevant applications, including two- and three-dimensional biological imaging, dynamical processes in living systems, single-cell imaging, quantitative analysis of circulatory systems, and so on.

  10. Unsupervised DInSAR processing chain for multi-scale displacement analysis

    NASA Astrophysics Data System (ADS)

    Casu, Francesco; Manunta, Michele

    2016-04-01

    Earth Observation techniques can be very helpful for estimating several sources of ground deformation due to their large spatial coverage, high resolution and cost effectiveness. In this scenario, Differential Synthetic Aperture Radar Interferometry (DInSAR) is one of the most effective methodologies for its capability to generate spatially dense deformation maps at both global and local spatial scales, with centimeter to millimeter accuracy. DInSAR exploits the phase difference (interferogram) between SAR image pairs relevant to acquisitions gathered at different times, but with the same illumination geometry and from sufficiently close flight tracks, whose separation is typically referred to as the baseline. Among these approaches, the SBAS algorithm is one of the most widely used, and it is aimed at generating displacement time series at a multi-scale level by exploiting a set of small-baseline interferograms. SBAS, and DInSAR generally, has benefited from the large availability of spaceborne SAR data collected over the years by several satellite systems, with particular regard to the European ERS and ENVISAT sensors, which acquired SAR images worldwide for approximately 20 years. Moreover, since 2014 the new generation of Copernicus Sentinel satellites has been acquiring data with a short revisit time (12 days) and a global coverage policy, thus flooding the scientific EO community with an unprecedented amount of data. To efficiently manage such amounts of data, proper processing facilities (such as those offered by emerging Cloud Computing technologies) have to be used, and novel algorithms aimed at their efficient exploitation have to be developed. In this work we present a set of results achieved by exploiting a recently proposed implementation of the SBAS algorithm, namely Parallel-SBAS (P-SBAS), which allows us to effectively process, in an unsupervised way and in a limited time frame, a huge number of SAR images, thus leading to the generation of interferometric products for both global and local scale displacement analysis. Among several examples, we will show a wide-area SBAS analysis carried out over southern California, in which the whole ascending ENVISAT data set of more than 740 images was fully processed in a Cloud Computing environment in less than 9 hours, leading to the generation of a displacement map of about 150,000 square kilometres. The P-SBAS characteristics also allowed us to integrate the algorithm within the ESA Geohazard Exploitation Platform (GEP), which is based on the use of GRID and Cloud Computing facilities, thus making freely available to the EO community a web tool for massive and systematic interferometric displacement time series generation. This work has been partially supported by: the Italian MIUR under the RITMARE project; the CNR-DPC agreement and the ESA GEP project.
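
    At its core, the SBAS inversion described above is a least-squares solve linking small-baseline interferogram phases to a per-pixel displacement time series. Below is a toy single-pixel sketch (synthetic dates, pairs, and noise); real P-SBAS adds phase unwrapping, atmospheric filtering, and orbital corrections.

      # Each interferogram constrains d_j - d_i for an acquisition pair (i, j);
      # with d_0 fixed to zero, least squares recovers the remaining dates.
      import numpy as np

      n_dates = 5
      pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
      A = np.zeros((len(pairs), n_dates - 1))
      for row, (i, j) in enumerate(pairs):
          if j > 0:
              A[row, j - 1] = 1.0
          if i > 0:
              A[row, i - 1] = -1.0

      true_d = np.array([0.4, 0.9, 1.1, 1.6])    # cumulative displacement (cm)
      phase = A @ true_d + np.random.default_rng(5).normal(0, 0.05, len(pairs))
      d_hat, *_ = np.linalg.lstsq(A, phase, rcond=None)
      print("recovered series:", np.round(d_hat, 2))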

  11. Common Data Format: New XML and Conversion Tools

    NASA Astrophysics Data System (ADS)

    Han, D. B.; Liu, M. H.; McGuire, R. E.

    2002-12-01

    Common Data Format (CDF) is a self-describing, platform-independent data format for storing, accessing, and manipulating scalar and multidimensional scientific data sets. Significant benefit has accrued to specific science communities from their use of standard formats within those communities. Examples include the International Solar Terrestrial Physics (ISTP) community in using CDF for traditional space physics data (fields, particles and plasma, waves, and images), the worldwide astronomical community in using FITS (Flexible Image Transport System) for solar data (primarily spectral images), the NASA Planetary community in using Planetary Data System (PDS) Labels, and the earth science community in using Hierarchical Data Format (HDF). Scientific progress in solar-terrestrial physics continues to be impeded by the multiplicity of available standards for data formats and a dearth of general data format translators. As a result, scientists today spend a significant amount of time translating data into the format they are familiar with for their research. To minimize this unnecessary data translation time and to allow more research time, the CDF office, located at the GSFC National Space Science Data Center (NSSDC), has developed HDF-to-CDF and FITS-to-CDF translators and employed the eXtensible Markup Language (XML) technology to facilitate and promote data interoperability within the space science community. We will present the current status of the CDF work, including the conversion tools that have been recently developed and those planned for the near future, share some of our XML experiences, and use the discussion to gain community feedback on our planned future work.
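
    In the spirit of the translators described above, a minimal FITS-to-CDF conversion can be written with the astropy and spacepy packages (the latter wraps the NASA CDF library, which must be installed separately). The file names are hypothetical, and real translators handle many more cases (tables, multiple HDUs, typed attributes):

      # Copy the primary image and header of a FITS file into a new CDF.
      from astropy.io import fits
      from spacepy import pycdf

      with fits.open("solar_image.fits") as hdul:
          data = hdul[0].data
          header = dict(hdul[0].header)

      with pycdf.CDF("solar_image.cdf", "") as cdf:   # '' creates an empty CDF
          cdf["IMAGE"] = data                          # image becomes a zVariable
          for key, value in header.items():
              if key:                                  # skip blank header cards
                  cdf.attrs[str(key)] = str(value)     # keywords as global attrs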

  12. Monitoring change in mountainous dry-heath vegetation at a regional scale using multitemporal Landsat TM data.

    PubMed

    Nordberg, Maj-Liz; Evertson, Joakim

    2003-12-01

    Vegetation cover-change analysis requires selection of an appropriate set of variables for measuring and characterizing change. Satellite sensors like Landsat TM offer the advantage of wide spatial coverage while providing land-cover information, which facilitates the monitoring of surface processes. This study discusses change detection in mountainous dry-heath communities in Jämtland County, Sweden, using satellite data. Landsat-5 TM and Landsat-7 ETM+ data from 1984, 1994 and 2000, respectively, were used. Different change detection methods were compared after the images had been radiometrically normalized, georeferenced and corrected for topographic effects. For detection of the classes "change" and "no change", the NDVI image differencing method was the most accurate, with an overall accuracy of 94% (K = 0.87). Additional change information was extracted with an alternative method, NDVI regression analysis, and vegetation change in three categories within mountainous dry-heath communities was detected. By applying a fuzzy set thresholding technique, the overall accuracy was improved from 65% (K = 0.45) to 74% (K = 0.59). The methods used generate a change product showing the location of changed areas in sensitive mountainous heath communities and also indicate the extent of the change (high, moderate and unchanged vegetation cover decrease). A total of 17% of the dry and extremely dry-heath vegetation within the study area changed between 1984 and 2000. On average 4% of the studied heath communities were classified as high change, i.e. experienced "high vegetation cover decrease" during the period. The results show that the low alpine zone of the southern part of the study area shows the highest amount of "high vegetation cover decrease". The results also show that the main change occurred between 1994 and 2000.
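
    The NDVI image-differencing step that performed best above is only a few lines of array arithmetic. Here is a condensed sketch with synthetic bands; the 1.5-sigma threshold is a hypothetical stand-in for the fuzzy thresholding the authors applied:

      # Flag pixels whose NDVI decreased markedly between two dates.
      import numpy as np

      def ndvi(nir, red):
          return (nir - red) / np.clip(nir + red, 1e-6, None)

      rng = np.random.default_rng(6)
      red_1984 = rng.random((300, 300)) * 0.3
      nir_1984 = rng.random((300, 300)) * 0.6
      red_2000 = red_1984 * 1.1                  # pretend vegetation declined:
      nir_2000 = nir_1984 * 0.8                  # more red, less NIR reflectance

      diff = ndvi(nir_2000, red_2000) - ndvi(nir_1984, red_1984)
      threshold = diff.mean() - 1.5 * diff.std() # flag strong NDVI decreases
      change_mask = diff < threshold
      print(f"{100 * change_mask.mean():.1f}% of pixels flagged as decrease")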

  13. Measuring landscape-scale spread and persistence of an invaded submerged plant community from airborne remote sensing.

    PubMed

    Santos, Maria J; Khanna, Shruti; Hestir, Erin L; Greenberg, Jonathan A; Ustin, Susan L

    2016-09-01

    Processes of spread and patterns of persistence of invasive species affect species and communities in the new environment. Predicting future rates of spread is of great interest for timely management decisions, but this depends on models that rely on understanding the processes of invasion and on historic observations of spread and persistence. Unfortunately, rates of spread and patterns of persistence are difficult to model or directly observe, especially when multiple rates of spread and diverse persistence patterns may be co-occurring over the geographic distribution of the invaded ecosystem. Remote sensing systematically acquires data over large areas at fine spatial and spectral resolutions over multiple time periods that can be used to quantify spread processes and persistence patterns. We used airborne imaging spectroscopy data acquired once a year for 5 years from 2004 to 2008 to map an invaded submerged aquatic vegetation (SAV) community across 2220 km² of waterways in the Sacramento-San Joaquin River Delta, California, USA, and measured its spread rate and its persistence. Submerged aquatic vegetation covered 13-23 km² of the waterways (6-11%) every year. Yearly new growth accounted for 40-60% of the SAV area, ~50% of which survived to the following year. Spread rates were overall negative and persistence decreased with time. From this dataset, we were able to identify both radial and saltatorial spread of the invaded SAV in the entire extent of the Delta over time. With both decreasing spread rate and persistence, it is possible that over time the invasion of this SAV community could decrease its ecological impact. A landscape-scale approach allows measurement of all invasion fronts and the spatial anisotropies associated with spread processes and persistence patterns, without spatial interpolation, at locations both proximate and distant to the focus of invasion at multiple points in time. © 2016 by the Ecological Society of America.

  14. The Calculation of Fractal Dimension in the Presence of Non-Fractal Clutter

    NASA Technical Reports Server (NTRS)

    Herren, Kenneth A.; Gregory, Don A.

    1999-01-01

    The area of information processing has grown dramatically over the last 50 years. In the areas of image processing and information storage, technology requirements have far outpaced the ability of the community to meet demands. The need for faster recognition algorithms and more efficient storage of large quantities of data has forced the user to accept less-than-lossless retrieval of that data for analysis. In addition to clutter that is not the object of interest in the data set, throughput requirements often force the user to accept "noisy" data and to tolerate the clutter inherent in that data. It has been shown that some of this clutter, both clutter in the scene itself (clouds, trees, etc.) and the noise introduced into the data by processing requirements, can be modeled as fractal or fractal-like. Traditional methods using Fourier deconvolution on these sources of noise in frequency space lead to loss of signal and can, in many cases, completely eliminate the target of interest. The parameters that characterize fractal-like noise (predominantly the fractal dimension) have been investigated, and a technique to reduce or eliminate noise from real scenes has been developed. Examples of clutter-reduced images are presented.
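
    The abstract does not spell out the estimator, but a common way to compute the fractal dimension of an image is box counting, sketched below for a square binary image with power-of-two side length. This is a stand-in illustration of the concept, not the authors' technique.

        import numpy as np

        def box_counting_dimension(img):
            """Estimate the fractal dimension of a binary image by box counting.

            Counts occupied boxes N(s) at dyadic box sizes s and fits
            log N(s) ~ -D log s; D is the box-counting dimension."""
            n = img.shape[0]                     # assume square, power-of-two side
            sizes, counts = [], []
            s = n // 2
            while s >= 1:
                # Partition into s x s boxes; count boxes containing any pixel.
                m = n - n % s
                blocks = img[:m, :m].reshape(m // s, s, m // s, s)
                occupied = blocks.any(axis=(1, 3)).sum()
                sizes.append(s)
                counts.append(max(occupied, 1))
                s //= 2
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return -slope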

  15. JunoCam: Science and Outreach Opportunities with Juno

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Orton, G. S.

    2015-12-01

    JunoCam is a visible-light imager on the Juno spacecraft en route to Jupiter. Although the primary role of the camera is outreach, science objectives will be addressed too. JunoCam is a wide-angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm. Juno's elliptical polar orbit will offer unique views of Jupiter's polar regions with a spatial scale of ~50 km/pixel. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired. Stereo images and images taken with the methane filter will allow us to estimate cloudtop heights. Resolution exceeds that of Cassini from about an hour before closest approach, and at closest approach images will have a spatial scale of ~3 km/pixel. JunoCam is a push-frame imager on a rotating spacecraft; the use of time-delayed integration takes advantage of the spacecraft spin to build up signal. JunoCam will acquire limb-to-limb views of Jupiter during a spacecraft rotation, and has the possibility of acquiring images of the rings from between Jupiter and the inner edge of the rings. Galilean satellite views will be fairly distant, but some images will be acquired. The small ring moons Metis and Adrastea will also be imaged. The theme of our outreach is "science in a fish bowl", with an invitation to the science community and the public to participate. Amateur astronomers will supply their ground-based images for planning, so that we can predict when prominent atmospheric features will be visible. With the aid of professional astronomers observing at infrared wavelengths, we'll predict when hot spots will be visible to JunoCam. Amateur image-processing enthusiasts are prepared to create image products. Between the planning and the products will be the decision-making on what images to take, when, and why. We invite our colleagues to propose science questions for JunoCam to address, and to be part of the participatory process of deciding how to use our resources and scientifically analyze the data.

  16. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation model.

    PubMed

    Mongkolwat, Pattanasak; Kleper, Vladimir; Talbot, Skip; Rubin, Daniel

    2014-12-01

    Knowledge contained within in vivo imaging, annotated by human experts or computer programs, is typically stored as unstructured text, separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or an Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding, along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.
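
    To make the annotation-plus-markup structure concrete, the sketch below builds a deliberately simplified, hypothetical record in Python. The element names are illustrative only and do not follow the normative AIM schema or its DICOM SR encoding.

        # Hypothetical, simplified illustration of an annotation-plus-markup
        # record in the spirit of AIM; element names do NOT follow the
        # normative AIM schema or the DICOM SR encoding.
        import xml.etree.ElementTree as ET

        annotation = ET.Element('ImageAnnotation', name='Lesion measurement')
        ET.SubElement(annotation, 'ImagingObservation', label='mass')
        markup = ET.SubElement(annotation, 'Markup', type='ellipse')
        for x, y in [(120.5, 88.0), (154.0, 88.0), (137.0, 70.5), (137.0, 105.5)]:
            ET.SubElement(markup, 'Coordinate', x=str(x), y=str(y))
        ET.SubElement(annotation, 'Calculation', type='area', value='882.3', unit='mm2')

        print(ET.tostring(annotation, encoding='unicode'))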

  17. Markov Dynamics as a Zooming Lens for Multiscale Community Detection: Non Clique-Like Communities and the Field-of-View Limit

    PubMed Central

    Schaub, Michael T.; Delvenne, Jean-Charles; Yaliraki, Sophia N.; Barahona, Mauricio

    2012-01-01

    In recent years, there has been a surge of interest in community detection algorithms for complex networks. A variety of computational heuristics, some with a long history, have been proposed for the identification of communities or, alternatively, of good graph partitions. In most cases, the algorithms maximize a particular objective function, thereby finding the ‘right’ split into communities. Although a thorough comparison of algorithms is still lacking, there has been an effort to design benchmarks, i.e., random graph models with known community structure against which algorithms can be evaluated. However, popular community detection methods and benchmarks normally assume an implicit notion of community based on clique-like subgraphs, a form of community structure that is not always characteristic of real networks. Specifically, networks that emerge from geometric constraints can have natural non clique-like substructures with large effective diameters, which can be interpreted as long-range communities. In this work, we show that long-range communities escape detection by popular methods, which are blinded by a restricted ‘field-of-view’ limit, an intrinsic upper scale on the communities they can detect. The field-of-view limit means that long-range communities tend to be overpartitioned. We show how, by adopting a dynamical perspective towards community detection [1], [2], in which the evolution of a Markov process on the graph is used as a zooming lens over the structure of the network at all scales, one can detect both clique-like and non clique-like communities without imposing an upper scale on the detection. Consequently, the performance of algorithms on inherently low-diameter, clique-like benchmarks may not always be indicative of equally good results in real networks with local, sparser connectivity. We illustrate our ideas with constructive examples and through the analysis of real-world networks from imaging, protein structures and the power grid, where a multiscale structure of non clique-like communities is revealed. PMID:22384178
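
    The dynamical quality function at the heart of this approach can be sketched compactly. Below is a minimal numpy illustration (not the authors' code) of the Markov stability of a partition of an undirected graph at integer Markov time t, in the trace formulation of the clustered autocovariance of a random walk.

        import numpy as np

        def markov_stability(A, labels, t):
            """Score a partition of an undirected graph at Markov time t.

            Computes r(t) = trace(H^T (Pi P^t - pi pi^T) H), where P is the
            random-walk transition matrix, pi its stationary distribution,
            and H the one-hot indicator matrix of the integer label array.
            Larger t probes coarser, longer-range communities."""
            d = A.sum(axis=1)
            P = A / d[:, None]                    # random-walk transition matrix
            pi = d / d.sum()                      # stationary distribution
            H = np.eye(labels.max() + 1)[labels]  # one-hot community indicators
            R = np.diag(pi) @ np.linalg.matrix_power(P, t) - np.outer(pi, pi)
            return np.trace(H.T @ R @ H)

    At t = 1 this reduces to the familiar Newman-Girvan modularity of the partition; sweeping t upward acts as the zooming lens, favoring progressively coarser, longer-range communities.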

  18. Community Oncology and Prevention Trials | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"168","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Early Detection Research Group Homepage Image","field_file_image_title_text[und][0][value]":"Early Detection Research Group Homepage Image","field_folder[und]":"15"},"type":"media","attributes":{"alt":"Early Detection Research Group Homepage Image","title":"Early

  19. Selective Imaging of Gram-Negative and Gram-Positive Microbiotas in the Mouse Gut.

    PubMed

    Wang, Wei; Zhu, Yuntao; Chen, Xing

    2017-08-01

    The diverse gut microbial communities are crucial for host health. How the interactions between microbial communities and between host and microbes influence the host, however, is not well understood. To facilitate gut microbiota research, selective imaging of specific groups of microbiotas in the gut is of great utility but remains technically challenging. Here we present a chemical approach that enables selective imaging of Gram-negative and Gram-positive microbiotas in the mouse gut by exploiting their distinctive cell wall components. Cell-selective labeling is achieved by the combined use of metabolic labeling of Gram-negative bacterial lipopolysaccharides with a clickable azidosugar and direct labeling of Gram-positive bacteria with a vancomycin-derivatized fluorescent probe. We demonstrated this strategy by two-color fluorescence imaging of Gram-negative and Gram-positive gut microbiotas in the mouse intestines. This chemical method should be broadly applicable to different gut microbiota research fields and other bacterial communities studied in microbiology.

  20. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadaster Agencies (NMCA), the updating cycle takes a few years. Today, the reality is dynamic and changes occur every day; users therefore expect the existing database to portray the current reality. Global mapping projects that are based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major areas of interest while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process involved comparing images from different periods. The success rates in identifying the objects were low, and results were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies, advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with a NIR band and Very High Resolution satellites, allow the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, Multi Spectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step toward updating the NTDB at the Survey of Israel.

  1. View synthesis using parallax invariance

    NASA Astrophysics Data System (ADS)

    Dornaika, Fadi

    2001-06-01

    View synthesis has become a focus of attention of both the computer vision and computer graphics communities. It consists of creating novel images of a scene as it would appear from novel viewpoints. View synthesis can be used in a wide variety of applications such as video compression, graphics generation, virtual reality and entertainment. This paper addresses the following problem: given a dense disparity map between two reference images, we would like to synthesize a novel view of the same scene associated with a novel viewpoint. Most existing work relies on building a set of 3D meshes which are then projected onto the new image (the rendering process is performed using texture mapping). The advantages of our view synthesis approach are as follows. First, the novel view is specified by a rotation and a translation, which are the most natural way to express the virtual location of the camera. Second, the approach is able to synthesize highly realistic images whose viewing position is significantly far from the reference viewpoints. Third, the approach is able to handle the visibility problem during the synthesis process. Our framework has two main steps. The first step (analysis step) consists of computing the homography at infinity, the epipoles, and thus the parallax field associated with the reference images. The second step (synthesis step) consists of warping the reference image into a new one, based on the invariance of the computed parallax field. The analysis step works directly on the reference views and needs to be performed only once. Examples of synthesizing novel views using either feature correspondences or a dense disparity map demonstrate the feasibility of the proposed approach.
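
    The warping relation that the two steps exploit can be written compactly. As a reconstruction of the standard plane-plus-parallax decomposition such methods build on (the paper's own equations are not reproduced here):

        \[ \mathbf{m}' \;\simeq\; H_{\infty}\,\mathbf{m} \;+\; \kappa(\mathbf{m})\,\mathbf{e}' \]

    where \(\mathbf{m}\) and \(\mathbf{m}'\) are homogeneous pixel coordinates in the two reference views, \(H_{\infty}\) is the homography at infinity, \(\mathbf{e}'\) is the epipole in the second view, and \(\kappa(\mathbf{m})\) is the parallax field, proportional to inverse depth. Because \(\kappa\) depends only on scene structure and the reference camera, it is invariant to the choice of the second view; the synthesis step re-uses the same field with a new rotation and translation to warp the reference image into the novel view.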

  2. Integrating Remote Sensing Data, Hybrid-Cloud Computing, and Event Notifications for Advanced Rapid Imaging & Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making through better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets present a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3 years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both the science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, on integrating event notifications from USGS to improve the timeliness of processing for response, and on providing browse results for quick looks together with other tools for integrative analysis.

  3. DSPACE hardware architecture for on-board real-time image/video processing in European space missions

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Donati, Massimiliano; Fanucci, Luca; Odendahl, Maximilian; Leupers, Reiner; Errico, Walter

    2013-02-01

    On-board data processing is a vital task for any satellite or spacecraft because sensing data must be processed before being sent to Earth in order to exploit the bandwidth to the ground station effectively. In recent years the amount of sensing data collected by scientific and commercial space missions has increased significantly, while the available downlink bandwidth has remained comparatively stable. The increasing demand for on-board real-time processing capability is one of the critical issues in forthcoming European missions. Ever faster signal and image processing algorithms are required to accomplish planetary observation, surveillance, Synthetic Aperture Radar imaging and telecommunications. The only available space-qualified Digital Signal Processor (DSP) free of International Traffic in Arms Regulations (ITAR) restrictions offers inadequate performance, so the need for a next-generation European DSP is well recognized in the space community. The DSPACE space-qualified DSP architecture fills the gap between the computational requirements and the available devices. It leverages a pipelined and massively parallel core based on the Very Long Instruction Word (VLIW) paradigm, with 64 registers and 8 operational units, along with cache memories, memory controllers and SpaceWire interfaces. Both the synthesizable VHDL and the software development tools are generated from the LISA high-level model. A Xilinx XC7K325T FPGA was chosen to realize a compact PCI demonstrator board. Finally, first synthesis results on CMOS standard-cell technology (ASIC 180 nm) show an area of around 380 kgates and a peak performance of 1000 MIPS and 750 MFLOPS at 125 MHz.

  4. Remote Sensing Technologies for Estuary Research and Management (Invited)

    NASA Astrophysics Data System (ADS)

    Hestir, E. L.; Ustin, S.; Khanna, S.; Botha, E.; Santos, M. J.; Anstee, J.; Greenberg, J. A.

    2013-12-01

    Estuarine ecosystems and their biogeochemical processes are extremely vulnerable to climate and environmental changes, and are threatened by sea level rise and upstream activities such as land use/land cover and hydrological changes. Despite the recognized threat to estuaries, most aspects of how change will affect estuaries are not well understood, owing to the poorly resolved understanding of the complex physical, chemical and biological processes and their interactions in estuarine systems. New and innovative remote sensing technologies such as high spectral resolution optical and thermal imagers and lidar, microwave radiometers and radar imagers enable measurements of key environmental parameters needed to establish baseline conditions and improve modeling efforts. Radar's sensitivity to water provides information about water height and velocity, channel geometry and wetland inundation. Water surface temperature and salinity can be measured by microwave radiometry and, when combined with radar-derived information, can characterize estuarine hydrodynamics. Optical and thermal hyperspectral imagers provide information about sediment, plant and water chemistry, including chlorophyll, dissolved organic matter and mineralogical composition. Lidar can measure bathymetry, microtopography and emergent plant structure. Plant functional types, wetland community distributions, turbidity, suspended and deposited sediments, dissolved organic matter, water column chlorophyll and phytoplankton functional types may be estimated from these measurements. Innovative deployment of advanced remote sensing technologies on airborne and submersible un-piloted platforms provides temporally and spatially continuous measurement in temporally dynamic and spatially complex tidal systems. Through biophysically based retrievals, these technologies provide direct measurement of physical, biological and biogeochemical conditions that can be used in models to understand estuarine processes and forecast responses to change. We demonstrate that innovative remote sensing technologies, coupled with long-term datasets from satellite earth observing missions and in situ sensor networks, provide the spatially contiguous measurements needed to make 'supra-regional' (e.g. river to coast) assessments of ecological communities, habitat distribution, ecosystem function, and sediment, nutrient and carbon sources and transport. We show that this information can be used to improve environmental modeling with increased confidence and support informed environmental management.

  5. Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data.

    PubMed

    Gu, Ke; Tao, Dacheng; Qiao, Jun-Fei; Lin, Weisi

    2018-04-01

    In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted wide attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. In this paper, we present two main contributions. The first is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples that are much larger than the relevant image data sets. The results of experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference and NR IQA methods. The second contribution is a robust image enhancement framework established on quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can effectively enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
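
    The 17 learned features and the trained regression module are not reproduced here; the skeleton below only illustrates the quality-guided enhancement loop, with a crude stand-in score and a simple gamma adjustment in place of the paper's histogram modification.

        import numpy as np

        def quality_score(img):
            """Stand-in for the learned NR-IQA regressor (NOT the paper's
            17-feature model): rewards contrast and mid-range brightness."""
            contrast = img.std() / 255.0
            brightness_penalty = abs(img.mean() - 128.0) / 128.0
            return contrast - 0.5 * brightness_penalty

        def enhance(img, gammas=(0.5, 0.7, 1.0, 1.4, 2.0)):
            """Pick the gamma correction that maximizes the stand-in quality
            score, mimicking the paper's quality-guided enhancement loop."""
            best, best_q = img, quality_score(img)
            for g in gammas:
                candidate = (255.0 * (img / 255.0) ** g).astype(np.uint8)
                q = quality_score(candidate)
                if q > best_q:
                    best, best_q = candidate, q
            return best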

  6. Molecular Imaging of Vulnerable Atherosclerotic Plaques in Animal Models

    PubMed Central

    Gargiulo, Sara; Gramanzini, Matteo; Mancini, Marcello

    2016-01-01

    Atherosclerosis is characterized by intimal plaques of the arterial vessels that develop slowly and, in some cases, may undergo spontaneous rupture with subsequent heart attack or stroke. Currently, noninvasive diagnostic tools are inadequate to screen atherosclerotic lesions at high risk of acute complications. Therefore, the attention of the scientific community has been focused on the use of molecular imaging for identifying vulnerable plaques. Genetically engineered murine models such as ApoE−/− and ApoE−/−Fbn1C1039G+/− mice have been shown to be useful for testing new probes targeting biomarkers of relevant molecular processes for the characterization of vulnerable plaques, such as vascular endothelial growth factor receptor (VEGFR)-1, VEGFR-2, intercellular adhesion molecule (ICAM)-1, P-selectin, and integrins, and for the potential development of translational tools to identify high-risk patients who could benefit from early therapeutic interventions. This review summarizes the main animal models of vulnerable plaques, with an emphasis on genetically altered mice, and the state-of-the-art preclinical molecular imaging strategies. PMID:27618031

  7. USGS remote sensing coordination for the 2010 Haiti earthquake

    USGS Publications Warehouse

    Duda, Kenneth A.; Jones, Brenda

    2011-01-01

    In response to the devastating 12 January 2010, earthquake in Haiti, the US Geological Survey (USGS) provided essential coordinating services for remote sensing activities. Communication was rapidly established between the widely distributed response teams and data providers to define imaging requirements and sensor tasking opportunities. Data acquired from a variety of sources were received and archived by the USGS, and these products were subsequently distributed using the Hazards Data Distribution System (HDDS) and other mechanisms. Within six weeks after the earthquake, over 600,000 files representing 54 terabytes of data were provided to the response community. The USGS directly supported a wide variety of groups in their use of these data to characterize post-earthquake conditions and to make comparisons with pre-event imagery. The rapid and continuing response achieved was enabled by existing imaging and ground systems, and skilled personnel adept in all aspects of satellite data acquisition, processing, distribution and analysis. The information derived from image interpretation assisted senior planners and on-site teams to direct assistance where it was most needed.

  8. The Role of Computers in Research and Development at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol D. (Compiler)

    1994-01-01

    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

  9. Understanding moisture recycling for atmospheric river management in Amazonian communities

    NASA Astrophysics Data System (ADS)

    Weng, Wei; Luedeke, Matthias; Zemp, Delphine-Clara; Lakes, Tobia; Pradhan, Prajal; Kropp, Juergen

    2017-04-01

    The invisible atmospheric transports of moisture have recently attracted more research effort into understanding their structure, the processes involved and their function as an ecosystem service. Current attention has focused on larger-scale analyses, such as studies of moisture recycling at the global or continental level. Here we applied a water balance model to backtrack the flying river that sustains two local communities in the Colombian and Peruvian Amazon, where vulnerable communities rely heavily on rainfall for agricultural practices. By using global precipitation (TRMM Multisatellite Precipitation Analysis, TMPA) and evapotranspiration (Moderate Resolution Imaging Spectroradiometer, MODIS MOD16 ET) products as model input to compensate for the sparse ground observations in these regions, we quantify the moisture recycling processes sustaining these two Amazonian communities, which had not yet been explored quantitatively. TMPA was selected because its precipitation estimates compare well with observations over the Amazon, while the MOD16 ET data were chosen because previous studies have validated them in the Amazon basin and reported good performance. On average, 45.5% of the precipitation falling in the Caquetá region of Colombia is of terrestrial origin from the South American continent, while 48.2% of the total rainfall received by Yurimaguas, Peru, likewise comes from South American land sources. The spatial distribution of the precipitationsheds (defined previously as the upwind contribution of evapotranspiration to a specific location's precipitation) shows transboundary and transnational shares among the moisture contributors to the precipitation of both regions. An interesting reversal of upstream-downstream roles can be observed: regions that are upstream in traditional watershed thinking become downstream areas when considering precipitationsheds and flying rivers. Our results also detect strong seasonal variations. Since rapid expansion of cultivated land in the precipitationsheds of these study areas can potentially alter the moisture recycling process that sustains ecosystems and communities, the teleconnection between moisture contributors and recipients presented in this study highlights that region-wide collaboration and communication will be essential for an adaptive Amazonia facing environmental change, especially in regard to its vulnerable communities.

  10. Alcohol advertising and violence against women: a media advocacy case study.

    PubMed

    Woodruff, K

    1996-08-01

    This article describes one effort to help prevent violence against women by addressing some of the larger societal factors involved. The Dangerous Promises campaign is based on the premise that sexist advertising images contribute to an environment conducive to violence against women. The goal of the campaign is to convince alcohol companies to eliminate sexist alcohol advertising and promotions. Using the tools of community organizing and media advocacy, the campaign pressures the alcohol industry to change the ways in which they portray women in much of their advertising. Media advocacy has been instrumental in the successes of the campaign. This article examines the strategies and outcomes of the Dangerous Promises efforts to date and makes a case for application of media advocacy as a tool for increasing community voice in policy-making processes.

  11. TESS Follow-up Observing Program (TFOP) Working Group:A Mission-led Effort to Coordinate Community Resources to Confirm TESS Planets

    NASA Astrophysics Data System (ADS)

    Collins, Karen; Quinn, Samuel N.; Latham, David W.; Christiansen, Jessie; Ciardi, David; Dragomir, Diana; Crossfield, Ian; Seager, Sara

    2018-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will observe most of the sky over a period of two years. Observations will be conducted in 26 sectors of sky coverage and each sector will be observed for ~27 days. Data from each sector are expected to produce hundreds of transiting planet candidates (PCs) per month and thousands over the two-year nominal mission. The TFOP Working Group (WG) is a mission-led effort organized to efficiently provide follow-up observations to confirm candidates as planets or reject them as false positives. The primary goal of the TFOP WG is to facilitate achievement of the Level One Science Requirement to measure masses for 50 transiting planets smaller than 4 Earth radii. Secondary goals are to serve any science coming out of TESS and to foster communication and coordination both within the TESS Science Team and with the community at large. The TFOP WG is organized as five Sub Groups (SGs). SG1 will provide seeing-limited imaging to measure blending within a candidate's aperture and time-series photometry to identify false positives and in some cases to improve ephemerides, light curves, and/or transit timing variation (TTV) measurements. SG2 will provide reconnaissance spectroscopy to identify astrophysical false positives and to contribute to improved host star parameters. SG3 will provide high-resolution imaging with adaptive optics, speckle imaging, and lucky imaging to detect nearby objects. SG4 will provide precise radial velocities to derive orbits of planet(s) and measure their mass(es) relative to the host star. SG5 will provide space-based photometry to confirm and/or improve the TESS photometric ephemerides, and will also provide improved light curves for transit events or TTV measurements. We describe the TFOP WG observing and planet confirmation process, the five SGs that comprise the TFOP WG, ExoFOP-TESS and other web-based tools being developed to support TFOP WG observers, other advantages of joining the TFOP WG, the TFOP WG charter and publication policy, preferred capabilities of SG team members, and the TFOP WG application process.

  12. MultiSpec: A Desktop and Online Geospatial Image Data Processing Tool

    NASA Astrophysics Data System (ADS)

    Biehl, L. L.; Hsu, W. K.; Maud, A. R. M.; Yeh, T. T.

    2017-12-01

    MultiSpec is an easy-to-learn and easy-to-use freeware image processing tool for interactively analyzing a broad spectrum of geospatial image data, with capabilities such as image display, unsupervised and supervised classification, feature extraction, feature enhancement, and several other functions. Originally developed for Macintosh and Windows desktop computers, it has a community of several thousand users worldwide, including researchers and educators, as a practical and robust solution for analyzing multispectral and hyperspectral remote sensing data in several different file formats. More recently, MultiSpec was adapted to run in the HUBzero collaboration platform so that it can be used within a web browser, allowing new user communities to be engaged through science gateways. MultiSpec Online has also been extended to interoperate with other components (e.g., data management) in HUBzero through integration with the geospatial data building blocks (GABBs) project. This integration enables a user to launch MultiSpec Online directly from data that is stored and/or shared in a HUBzero gateway and to save output data from MultiSpec Online to hub storage, allowing data sharing and multi-step workflows without having to move data between different systems. MultiSpec has also been used in K-12 classes, one example being the GLOBE program (www.globe.gov), and in outreach material such as that provided by the USGS (eros.usgs.gov/educational-activities). MultiSpec Online now provides teachers with another way to use MultiSpec without having to install the desktop tool. Recently, MultiSpec Online was used in a geospatial data session with 30-35 middle school students at the Turned Onto Technology and Leadership (TOTAL) Camp in the summers of 2016 and 2017 at Purdue University. The students worked on a flood mapping exercise using Landsat 5 data to learn about land remote sensing using supervised classification techniques. Online documentation is available for MultiSpec (engineering.purdue.edu/~biehl/MultiSpec/), including a reference manual and several tutorials allowing users from young high-school students through research faculty to learn the basic functions in MultiSpec. Some of the tutorials have been translated to other languages by MultiSpec users.
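
    One of the supervised techniques MultiSpec offers is Gaussian maximum likelihood classification; the sketch below illustrates that general technique on arrays of pixel spectra. It is not MultiSpec's code, and equal class priors are assumed.

        import numpy as np

        def train_ml(classes):
            """Fit a Gaussian per class from training pixels.
            `classes` maps class name -> (n_pixels, n_bands) array."""
            stats = {}
            for name, X in classes.items():
                mu = X.mean(axis=0)
                cov = np.cov(X, rowvar=False)
                stats[name] = (mu, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
            return stats

        def classify(pixels, stats):
            """Assign each (n_bands,) pixel in `pixels` (n_pixels, n_bands)
            to the class with the largest Gaussian log-likelihood."""
            names = list(stats)
            scores = []
            for name in names:
                mu, icov, logdet = stats[name]
                d = pixels - mu
                scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, icov, d)))
            return np.array(names)[np.argmax(scores, axis=0)]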

  13. Advances in diffusion MRI acquisition and processing in the Human Connectome Project

    PubMed Central

    Sotiropoulos, Stamatios N; Jbabdi, Saad; Xu, Junqian; Andersson, Jesper L; Moeller, Steen; Auerbach, Edward J; Glasser, Matthew F; Hernandez, Moises; Sapiro, Guillermo; Jenkinson, Mark; Feinberg, David A; Yacoub, Essa; Lenglet, Christophe; Van Essen, David C; Ugurbil, Kamil; Behrens, Timothy EJ

    2013-01-01

    The Human Connectome Project (HCP) is a collaborative 5-year effort to map human brain connections and their variability in healthy adults. A consortium of HCP investigators will study a population of 1200 healthy adults using multiple imaging modalities, along with extensive behavioral and genetic data. In this overview, we focus on diffusion MRI (dMRI) and the structural connectivity aspect of the project. We present recent advances in acquisition and processing that allow us to obtain very high-quality in-vivo MRI data, while enabling scanning of a very large number of subjects. These advances result from 2 years of intensive efforts in optimising many aspects of data acquisition and processing during the piloting phase of the project. The data quality and methods described here are representative of the datasets and processing pipelines that will be made freely available to the community at quarterly intervals, beginning in 2013. PMID:23702418

  14. Social Equality in Mass Higher Education: Connecticut Community Colleges.

    ERIC Educational Resources Information Center

    Abel, Emily K.

    The rhetoric of the community colleges presents them as democratizing agents, enabling the underprivileged to move upward in society through education. While this is their purpose, the community colleges also aspire to gain acceptance as regular members of the system of higher education. In Connecticut, the image of the community colleges suffers…

  15. Positioning Community Colleges via Economic Development. ERIC Digest.

    ERIC Educational Resources Information Center

    Zeiss, Anthony

    Community colleges, because of their late arrival in the development of American education, have suffered from an image and identity problem since their inception. To deal with this problem, community colleges should position themselves as unique community-based service-oriented colleges and market a specific focus to the general public. The first…

  16. Constituent Perceptions of a Community College: An "Image" Study.

    ERIC Educational Resources Information Center

    Conklin, Karen A.

    Every 5 years, Johnson County Community College (JCCC), in Overland Park, Kansas, conducts a study of community perceptions to measure the level of community satisfaction with the overall mission of the college. Specifically, the studies seek to measure constituents' awareness of JCCC's role, their support of the college's activities to fulfill…

  17. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2005-01-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer is different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists we examine the divergent evolution of metadata standards for digital music and digital images and observed that the processes differ in interesting ways according to their intent. Specifically music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry, it is limited to the personal libraries growing on our hard-drives. This bottom-up approach to file management combined with p2p distribution radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images-doing image metadata and file management the MP3 way-and considers the likelihood of success.

  18. Target discrimination of man-made objects using passive polarimetric signatures acquired in the visible and infrared spectral bands

    NASA Astrophysics Data System (ADS)

    Lavigne, Daniel A.; Breton, Mélanie; Fournier, Georges; Charette, Jean-François; Pichette, Mario; Rivet, Vincent; Bernier, Anne-Pier

    2011-10-01

    Surveillance operations and search-and-rescue missions in both the civilian and military communities regularly exploit electro-optic imaging systems to detect targets of interest. By incorporating the polarization of light as supplementary information in such electro-optic imaging systems, it is possible to increase their target discrimination capabilities, since man-made objects are known to depolarize light in a different manner than natural backgrounds. Because electromagnetic radiation emitted and reflected from a smooth surface observed near a grazing angle becomes partially polarized in the visible and infrared wavelength bands, additional information about the shape, roughness, shading, and surface temperature of difficult targets can be extracted by effectively processing such reflected/emitted polarized signatures. This paper presents a set of polarimetric image processing algorithms devised to extract meaningful information from a broad range of man-made objects. Passive polarimetric signatures are acquired in the visible, shortwave infrared, midwave infrared, and longwave infrared bands using a fully automated imaging system developed at DRDC Valcartier. A fusion algorithm is used to enable the discrimination of objects lying in shadowed areas. Performance metrics, derived from the computed Stokes parameters, characterize the degree of polarization of man-made objects. Field experiments conducted during winter and summer demonstrate: 1) the utility of the imaging system for collecting polarized signatures of different objects in the visible and infrared spectral bands, and 2) the enhanced performance of target discrimination and fusion algorithms that exploit the polarized signatures of man-made objects against cluttered backgrounds.
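
    The Stokes-parameter metrics the paper refers to follow a standard recipe. The sketch below assumes four intensity images taken at polarizer angles 0°, 45°, 90°, and 135° (a common configuration; the DRDC system's exact setup is not specified here) and computes the degree of linear polarization that tends to separate man-made surfaces from natural clutter.

        import numpy as np

        def degree_of_linear_polarization(i0, i45, i90, i135):
            """Stokes parameters and DoLP from four polarizer-angle images.

            S0 = (I0 + I45 + I90 + I135) / 2,  S1 = I0 - I90,  S2 = I45 - I135;
            DoLP = sqrt(S1^2 + S2^2) / S0. Man-made surfaces typically show
            higher DoLP than natural backgrounds."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)
            s1 = i0 - i90
            s2 = i45 - i135
            dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
            aop = 0.5 * np.degrees(np.arctan2(s2, s1))   # angle of polarization
            return dolp, aop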

  19. Modular spectral imaging system for discrimination of pigments in cells and microbial communities.

    PubMed

    Polerecky, Lubos; Bissett, Andrew; Al-Najjar, Mohammad; Faerber, Paul; Osmers, Harald; Suci, Peter A; Stoodley, Paul; de Beer, Dirk

    2009-02-01

    Here we describe a spectral imaging system for minimally invasive identification, localization, and relative quantification of pigments in cells and microbial communities. The modularity of the system allows pigment detection on spatial scales ranging from the single-cell level to regions whose areas are several tens of square centimeters. For pigment identification, in vivo absorption and/or autofluorescence spectra are used as the analytical signals. Along with the hardware, which is easy to transport, simple to assemble and allows rapid measurement, we describe newly developed software that allows highly sensitive and pigment-specific analyses of the hyperspectral data. We also propose and describe a number of applications of the system for microbial ecology, including identification of pigments in living cells and high-spatial-resolution imaging of pigments and the associated phototrophic groups in complex microbial communities, such as photosynthetic endolithic biofilms, microbial mats, and intertidal sediments. This system provides new possibilities for studying the role of spatial organization of microorganisms in the ecological functioning of complex benthic microbial communities, and for noninvasively monitoring changes in the spatial organization and/or composition of a microbial community in response to changing environmental factors.
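
    The newly developed analysis software is not reproduced here; one common way to perform pigment-specific matching on hyperspectral data is the spectral angle mapper, sketched below against a single reference pigment spectrum.

        import numpy as np

        def spectral_angle_map(cube, reference):
            """Per-pixel spectral angle between a hyperspectral cube
            (rows, cols, bands) and a reference pigment spectrum (bands,).
            Smaller angles indicate closer spectral matches."""
            dot = np.tensordot(cube, reference, axes=([2], [0]))
            norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
            cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
            return np.arccos(cos)            # radians; shape (rows, cols)

        # A pigment map could then threshold the angle, e.g. angle < 0.1 rad.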

  1. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes an experiment assessing the influence of post-processing on image quality. The experiment includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as input to the image processing steps are produced by this imaging system with those parameters. The gathered optically sampled images are then processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment experiments can be cross-validated against each other. Main conclusions include: image post-processing can improve image quality; it can do so even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our image post-processing method, image quality is best when the camera MTF lies within a small range.

  2. Methods and measurement variance for field estimations of coral colony planar area using underwater photographs and semi-automated image segmentation.

    PubMed

    Neal, Benjamin P; Lin, Tsung-Han; Winter, Rivah N; Treibitz, Tali; Beijbom, Oscar; Kriegman, David; Kline, David I; Greg Mitchell, B

    2015-08-01

    Size and growth rates for individual colonies are some of the most essential descriptive parameters for understanding coral communities, which are currently experiencing worldwide declines in health and extent. Accurately measuring coral colony size and changes over multiple years can reveal demographic, growth, or mortality patterns often not apparent from short-term observations and can expose environmental stress responses that may take years to manifest. Describing community size structure can reveal population dynamics patterns, such as periods of failed recruitment or patterns of colony fission, which have implications for the future sustainability of these ecosystems. However, rapidly and non-invasively measuring coral colony sizes in situ remains a difficult task, as three-dimensional underwater digital reconstruction methods are currently not practical for large numbers of colonies. Two-dimensional (2D) planar area measurements from projection of underwater photographs are a practical size proxy, although this method presents operational difficulties in obtaining well-controlled photographs in the highly rugose environment of the coral reef, and requires extensive time for image processing. Here, we present and test the measurement variance for a method of making rapid planar area estimates of small to medium-sized coral colonies using a lightweight monopod image-framing system and a custom semi-automated image segmentation analysis program. This method demonstrated a coefficient of variation of 2.26% for repeated measurements in realistic ocean conditions, a level of error appropriate for rapid, inexpensive field studies of coral size structure, inferring change in colony size over time, or measuring bleaching or disease extent of large numbers of individual colonies.
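
    The planar-area computation itself is straightforward once a colony is segmented; the sketch below assumes a boolean mask from the segmentation step and a known scale fixed by the monopod framer geometry (the values used are hypothetical).

        import numpy as np

        def planar_area_cm2(mask, cm_per_pixel):
            """Planar area of a segmented coral colony.

            `mask` is a boolean segmentation of the colony in a photograph
            taken at a fixed, known distance (the monopod framer), so each
            pixel subtends a known ground distance `cm_per_pixel`."""
            return mask.sum() * cm_per_pixel**2

        # Example: a 35,000-pixel colony at 0.05 cm/pixel covers 87.5 cm^2.
        mask = np.zeros((480, 640), dtype=bool)
        mask[100:200, 150:500] = True
        print(planar_area_cm2(mask, 0.05))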

  3. Combining Footwear with Public Health Iconography to Prevent Soil-Transmitted Helminth Infections.

    PubMed

    Paige, Sarah B; Friant, Sagan; Clech, Lucie; Malavé, Carly; Kemigabo, Catherine; Obeti, Richard; Goldberg, Tony L

    2017-01-11

    Shoes are effective for blocking soil-transmitted helminths (STHs) that penetrate the skin. Unfortunately, shoe-wearing is uncommon in many areas where STHs are prevalent, in part because local populations are unaware of the health benefits of wearing shoes. This is especially true in low-literacy populations, where information dissemination through written messages is not possible. We launched a public health intervention that combines a public health image with sandals. The image is a "lenticular image" that combines two alternating pictures to depict the efficacy of shoes for preventing STH infection. This image is adhered to the shoe, such that the message is linked directly to the primary means of prevention. To create a culturally appropriate image, we conducted five focus group discussions, each with a different gender and age combination. Results of focus group discussions reinforced the importance of refining public health messages well in advance of distribution so that cultural acceptability is strong. After the image was finalized, we deployed shoes with the image in communities in western Uganda where hookworm is prevalent. We found that the frequency of shoe-wearing was 25% higher in communities receiving the shoes than in control communities. Microscopic analyses of fecal samples for parasites showed a sustained reduction in infection intensity for parasites transmitted directly through the feet when people received shoes with a public health image. Our results show that combining culturally appropriate images with public health interventions can be effective in low-literacy populations. © The American Society of Tropical Medicine and Hygiene.

  4. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible. PMID:23637895
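
    The statistical internals of the pipeline are not reproduced here; the sketch below only illustrates the kind of low-dimensional projection and outlier flagging the authors suggest would be feasible, using a plain PCA of the per-scan QA metrics.

        import numpy as np

        def pca_outliers(metrics, k=2, z_thresh=3.0):
            """Project per-scan QA metrics (n_scans, n_metrics) onto their
            first k principal components and flag scans whose scores lie
            more than z_thresh standard deviations from the mean."""
            std = metrics.std(axis=0)
            X = (metrics - metrics.mean(axis=0)) / np.where(std > 0, std, 1.0)
            _, _, Vt = np.linalg.svd(X, full_matrices=False)
            scores = X @ Vt[:k].T                    # (n_scans, k) projection
            z = np.abs(scores) / scores.std(axis=0)
            return np.where((z > z_thresh).any(axis=1))[0]   # outlier indices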

  5. Applying NASA Imaging Radar Datasets to Investigate the Geomorphology of the Amazon's Planalto

    NASA Astrophysics Data System (ADS)

    McDonald, K. C.; Campbell, K.; Islam, R.; Alexander, P. M.; Cracraft, J.

    2016-12-01

    The Amazon basin is a biodiversity-rich biome and plays a significant role in shaping Earth's climate and its ocean and atmospheric gases. Understanding the history of the formation of this basin is essential to our understanding of the region's biodiversity and its response to climate change. During March 2013, the NASA/JPL L-band polarimetric airborne imaging radar, UAVSAR, conducted airborne studies over regions of South America including portions of the western Amazon basin. We utilize UAVSAR imagery acquired during that time over the Planalto, in the Madre de Dios region of southeastern Peru, in an assessment of the underlying geomorphology, its relationship to the current distribution of vegetation, and its relationship to geologic processes through deep time. We employ UAVSAR data collections to assess the utility of these high-quality imaging radar data for identifying geomorphologic features and vegetation communities within the context of improving the understanding of evolutionary processes, and their utility in aiding interpretation of datasets from Earth-orbiting satellites to support a basin-wide characterization across the Amazon. We derive maps of landcover and river branching structure from UAVSAR imagery. We compare these maps to those derived using imaging radar datasets from the Japanese Space Agency's ALOS PALSAR and Digital Elevation Models (DEMs) from NASA's Shuttle Radar Topography Mission (SRTM). Results provide an understanding of the underlying geomorphology of the Amazon Planalto as well as its relationship to geologic processes and will support interpretation of the evolutionary history of the Amazon Basin. Portions of this work have been carried out within the framework of the ALOS Kyoto & Carbon Initiative. PALSAR data were provided by JAXA/EORC and the Alaska Satellite Facility. This work is carried out with support from the NASA Biodiversity Program and the NSF DIMENSIONS of Biodiversity Program.

  6. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data relies on visual inspection and individual processing in DTI analysis software programs (e.g., DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible.
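
    The manifold-projection idea lends itself to a compact sketch. Below is a minimal, hypothetical illustration (not the paper's pipeline) that standardizes a table of per-scan QA metrics, projects it to two principal components via an SVD, and flags scans far from the bulk; the metric layout and the outlier cutoff are assumptions.

```python
import numpy as np

def project_qa_metrics(metrics, n_components=2, z_cut=3.0):
    """Project per-scan QA metrics (rows = scans, cols = metrics such as
    noise level, motion score, tensor-fit residual) to a low-dimensional
    space and flag candidate outliers. Hypothetical metric layout."""
    # Standardize each metric so scale differences do not dominate.
    z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
    # PCA via SVD of the standardized matrix.
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]
    # Flag scans that sit far from the origin of the reduced space.
    dist = np.linalg.norm(scores, axis=1)
    outliers = dist > z_cut * dist.std()
    return scores, outliers

# Example: 608 scans, 5 QA metrics (random stand-in data).
rng = np.random.default_rng(0)
scores, flags = project_qa_metrics(rng.normal(size=(608, 5)))
print(scores.shape, int(flags.sum()))
```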

  7. Some practicable applications of quadtree data structures/representation in astronomy

    NASA Technical Reports Server (NTRS)

    Pasztor, L.

    1992-01-01

    Development of the quadtree as a hierarchical data-structuring technique for representing spatial data (points, regions, surfaces, lines, curves, volumes, etc.) has been motivated to a large extent by the storage requirements of images, maps, and other multidimensional (spatially structured) data. For many spatial algorithms, the time-efficiency of quadtrees in execution may be as important as their space-efficiency in storage. Briefly, the quadtree is a class of hierarchical data structures based on the recursive partition of a square region into quadrants and sub-quadrants until a predefined limit is reached. Beyond the wide applicability of quadtrees in image processing, spatial information analysis, and the building of digital databases (processes becoming ordinary for the astronomical community), there may be numerous further applications in astronomy. Some of these practicable applications based on quadtree representation of astronomical data are presented and suggested for further consideration. Examples are shown for the use of point as well as region quadtrees. Statistics of the different leaf and non-leaf nodes (homogeneous and heterogeneous sub-quadrants, respectively) at different levels may provide useful information on the spatial structure of the astronomical data in question. By altering the principle guiding the decomposition process, different types of spatial data may be focused on. Finally, a sampling method based on the quadtree representation of an image is proposed, which may prove efficient in elaborating a sampling strategy for a region where observations were previously carried out with different resolution and/or in different bands.
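
    The recursive decomposition described above is easy to make concrete. The following is a minimal region-quadtree sketch, assuming a square 2^n grayscale image and a homogeneity test based on the block's standard deviation (both assumptions, not from the paper):

```python
import numpy as np

def build_quadtree(img, x=0, y=0, size=None, max_std=5.0, min_size=4):
    """Recursively partition a square image into quadrants until each
    leaf is homogeneous (std <= max_std) or reaches min_size.
    Returns nested dicts; leaves carry the mean value of their block."""
    if size is None:
        size = img.shape[0]               # assume a square 2^n image
    block = img[y:y + size, x:x + size]
    if block.std() <= max_std or size <= min_size:
        return {"x": x, "y": y, "size": size, "leaf": True,
                "mean": float(block.mean())}
    half = size // 2
    return {"x": x, "y": y, "size": size, "leaf": False,
            "children": [build_quadtree(img, x, y, half, max_std, min_size),
                         build_quadtree(img, x + half, y, half, max_std, min_size),
                         build_quadtree(img, x, y + half, half, max_std, min_size),
                         build_quadtree(img, x + half, y + half, half, max_std, min_size)]}

img = np.zeros((64, 64)); img[16:32, 16:32] = 200   # toy "region" image
tree = build_quadtree(img)
```

    Counting leaf and non-leaf nodes per level of such a tree yields exactly the kind of spatial-structure statistics the abstract proposes.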

  8. The Synthetic Aperture Radar Science Data Processing Foundry Concept for Earth Science

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Hua, H.; Norton, C. D.; Little, M. M.

    2015-12-01

    Since 2008, NASA's Earth Science Technology Office (ESTO) and the Advanced Information Systems Technology Program have invested in two technology evolutions to meet the needs of the community of scientists exploiting the rapidly growing database of international synthetic aperture radar (SAR) data. JPL, working with the science community, has developed the InSAR Scientific Computing Environment (ISCE), a next-generation interferometric SAR processing system that is designed to be flexible and extensible. ISCE currently supports many international spaceborne data sets but has been primarily focused on geodetic science and applications. A second evolutionary path, the Advanced Rapid Imaging and Analysis (ARIA) science data system, uses ISCE as its core science data processing engine and produces automated science and response products, quality assessments, and metadata. The success of this two-front effort has been demonstrated in NASA's ability to respond to recent events with useful disaster support. JPL has enabled high-volume, low-latency data production by reusing the hybrid cloud computing science data system (HySDS) that runs ARIA, leveraging on-premise cloud computing assets that can burst onto Amazon Web Services (AWS) as needed. Beyond geodetic applications, needs have emerged to process large volumes of time-series SAR data collected for estimation of biomass and its change, in such campaigns as the upcoming AfriSAR field campaign. ESTO is funding JPL to extend the ISCE-ARIA model to a "SAR Science Data Processing Foundry" to on-ramp new data sources and to produce new science data products to meet the needs of science teams and, more generally, the science community. An extension of the ISCE-ARIA model to support on-demand processing will permit PIs to leverage this Foundry to produce data products from accepted data sources when they need them. This paper will describe each of the elements of the SAR SDP Foundry and their integration into a new conceptual approach to enable more effective use of SAR instruments.

  9. The Effects of Bad and Good News on Newspaper Image and Community Image. A Report from the Communications Research Center.

    ERIC Educational Resources Information Center

    Haskins, Jack B.

    A study tested the hypotheses that the relative amount of bad news and good news in a newspaper would have corresponding effects on perceptions of the newspaper's community of origin and of the newspaper itself. Five different versions of a realistic four-page newspaper were created, in which treatment of the news stories ranged from an…

  10. In Vogue: How Valencia Community College Used a High-Fashion Marketing Campaign to Sharpen Its Image

    ERIC Educational Resources Information Center

    Campagnuolo, Christian

    2008-01-01

    Not unlike many community colleges across the country, Valencia Community College, located in Orlando, Florida, has been working to better connect with its constituents. In an era in which the Internet is opening new lines of communication between schools and prospective students, more community colleges are tapping into the opportunities inherent…

  11. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics.

    PubMed

    Sharma, Harshita; Alekseychuk, Alexander; Leskovsky, Peter; Hellwich, Olaf; Anand, R S; Zerbe, Norman; Hufnagl, Peter

    2012-10-04

    Computer-based analysis of digitized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered a powerful and versatile representation formalism, and they have gained growing consideration, especially in the image processing and computer vision community. The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher-order (region-based) graph representation of breast biopsy images has been attained, and a tree-search-based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. The results obtained and the evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The complexity of the employed graph matching has been reduced, compared with state-of-the-art optimal inexact matching methods, by applying a prerequisite criterion for the matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for use in applications requiring content-based image retrieval (CBIR) in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923.
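
    As a rough illustration of the node-correspondence idea (not the authors' tree-search matcher with its prognosis function), the sketch below scores similarity between two region graphs by optimally assigning nodes on attribute distance and penalizing unmatched regions; the feature layout and costs are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def graph_similarity(feats_a, feats_b, unmatched_cost=1.0):
    """Crude stand-in for inexact graph matching: match region nodes of two
    images by attribute distance (e.g., mean intensity, area, shape) using
    an optimal assignment, and penalize unmatched nodes. Higher = more
    similar. A real matcher would also compare edge (adjacency) structure."""
    na, nb = len(feats_a), len(feats_b)
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)        # optimal node pairing
    total = cost[rows, cols].sum() + unmatched_cost * abs(na - nb)
    return -total

# Rank a tiny database of region graphs against a query (random stand-ins).
rng = np.random.default_rng(1)
query = rng.random((6, 3))                  # 6 regions, 3 attributes each
database = [rng.random((n, 3)) for n in (5, 6, 9)]
ranked = sorted(range(len(database)),
                key=lambda i: -graph_similarity(query, database[i]))
print("retrieval order:", ranked)
```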

  12. Magnetic resonance neurography and diffusion tensor imaging: origins, history, and clinical impact of the first 50,000 cases with an assessment of efficacy and utility in a prospective 5000-patient study group.

    PubMed

    Filler, Aaron

    2009-10-01

    Methods were invented that made it possible to image peripheral nerves in the body and to image neural tracts in the brain. The history, physical basis, and dyadic tensor concept underlying the methods are reviewed. Over a 15-year period, these techniques, magnetic resonance neurography (MRN) and diffusion tensor imaging, were deployed in the clinical and research community in more than 2500 published research reports and applied to approximately 50,000 patients. Within this group, approximately 5000 patients who underwent MRN were carefully tracked on a prospective basis. A uniform neurography imaging methodology was applied in the study group, and all images were reviewed and registered by referral source, clinical indication, efficacy of imaging, and quality. Various classes of image findings were identified and subjected to a variety of small targeted prospective outcome studies. Those findings demonstrated to be clinically significant were then tracked in the larger clinical volume data set. MRN demonstrates mechanical distortion of nerves, hyperintensity consistent with nerve irritation, nerve swelling, discontinuity, relations of nerves to masses, and image features revealing distortion of nerves at entrapment points. These findings are often clinically relevant and warrant full consideration in the diagnostic process. They result in specific pathological diagnoses that are comparable to electrodiagnostic testing in clinical efficacy. A review of clinical outcome studies with diffusion tensor imaging also shows convincing utility. MRN and diffusion tensor neural tract imaging have been validated as indispensable clinical diagnostic methods that provide reliable anatomic and pathological information. There is no alternative diagnostic method in many situations. With 15 years elapsed, tens of thousands of imaging studies, and thousands of publications, these methods should no longer be considered experimental.

  13. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics

    PubMed Central

    2012-01-01

    Background Computer-based analysis of digitized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered a powerful and versatile representation formalism, and they have gained growing consideration, especially in the image processing and computer vision community. Methods The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher-order (region-based) graph representation of breast biopsy images has been attained, and a tree-search-based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. Results The results obtained and the evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The complexity of the employed graph matching has been reduced, compared with state-of-the-art optimal inexact matching methods, by applying a prerequisite criterion for the matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. Conclusion The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for use in applications requiring content-based image retrieval (CBIR) in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923. PMID:23035717

  14. Development of a Medical Cyclotron Production Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Danny R.

    Development of a cyclotron manufacturing facility begins with a business plan. Geographic factors, the size and activity of the medical community, the growth potential of the modality being served, and other business connections are all considered. This business used the customer base established by NuTech, Inc., an independent centralized nuclear pharmacy founded by Danny Allen. With two pharmacies in operation in Tyler and College Station and a customer base of 47 hospitals and clinics, the existing delivery system and pharmacist staff are used for the cyclotron facility. We then added cyclotron products to contracts with these customers to guarantee a supply. We partnered with a company in the process of developing PET imaging centers. We then built an independent imaging center attached to the cyclotron facility to allow for the use of short-lived isotopes.

  15. Imaging and Analytical Approaches for Characterization of Soil Mineral Weathering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dohnalkova, Alice; Arey, Bruce; Varga, Tamas

    Soil mineral weathering is the primary natural source of nutrients necessary to sustain productivity in terrestrial ecosystems. Soil microbial communities increase soil mineral weathering and mineral-derived nutrient availability through physical and chemical processes. The rhizosphere, the zone immediately surrounding plant roots, is a biogeochemical hotspot, with microbial activity, soil organic matter production, mineral weathering, and secondary phase formation all happening in a small, temporally ephemeral zone of steep geochemical gradients. Detailed exploration of the micro-scale rhizosphere is essential to a better understanding of large-scale processes in soils, such as nutrient cycling, transport and fate of soil components, microbial-mineral interactions, soil erosion, soil organic matter turnover and its molecular-level characterization, and predictive modeling.

  16. A Survey of Memristive Threshold Logic Circuits.

    PubMed

    Maan, Akshay Kumar; Jayadevi, Deepthi Anirudhan; James, Alex Pappachen

    2017-08-01

    In this paper, we review different memristive threshold logic (MTL) circuits that are inspired from the synaptic action of the flow of neurotransmitters in the biological brain. The brainlike generalization ability and the area minimization of these threshold logic circuits aim toward crossing Moore's law boundaries at device, circuits, and systems levels. Fast switching memory, signal processing, control systems, programmable logic, image processing, reconfigurable computing, and pattern recognition are identified as some of the potential applications of MTL systems. The physical realization of nanoscale devices with memristive behavior from materials, such as TiO2, ferroelectrics, silicon, and polymers, has accelerated research effort in these application areas, inspiring the scientific community to pursue the design of high-speed, low-cost, low-power, and high-density neuromorphic architectures.
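
    The core MTL operation reduces to a weighted sum compared against a threshold, with memristor conductances supplying the weights. A minimal behavioral sketch (illustrative values, not a device-level model):

```python
# Behavioral model of a memristive threshold logic (MTL) gate: memristor
# conductances act as weights, and the cell fires when the weighted sum
# of its inputs reaches a threshold. Values below are illustrative.
def mtl_gate(inputs, conductances, threshold):
    """Return 1 if the conductance-weighted input sum reaches threshold."""
    weighted = sum(x * g for x, g in zip(inputs, conductances))
    return int(weighted >= threshold)

# The same cell realizes different logic functions purely by choice of
# threshold (two inputs, unit conductances): AND needs both, OR needs one.
for a in (0, 1):
    for b in (0, 1):
        and_out = mtl_gate((a, b), (1.0, 1.0), threshold=2.0)
        or_out = mtl_gate((a, b), (1.0, 1.0), threshold=1.0)
        print(a, b, "AND:", and_out, "OR:", or_out)
```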

  17. Integration of Remote Sensing Products with Ground-Based Measurements to Understand the Dynamics of Nepal's Forests and Plantation Sites

    NASA Astrophysics Data System (ADS)

    Gilani, H.; Jain, A. K.

    2016-12-01

    This study assembles information from three sources - remote sensing, terrestrial photography and ground-based inventory data - to understand the dynamics of Nepal's tropical and sub-tropical forests and plantation sites for the period 1990-2015. Our study focuses on three district areas, which have conserved forests through social and agroforestry management practices. 1. Dolakha district: This site was selected to study the impact of community-based forest management on land cover change using repeat photography and satellite imagery, in combination with interviews with community members, over the period 1990-2010. We determined that satellite data combined with ground photographs can provide transparency for long-term monitoring. The initial results also suggest that the community-based forest management program in the mid-hills of Nepal was successful. 2. Chitwan district: Here we use high-resolution remote sensing data and optimized community field inventories to evaluate the potential application and operational feasibility of community-level REDD+ measuring, reporting and verification (MRV) systems. The study uses temporal dynamics of land cover transitions, tree canopy size classes and biomass over a Kayar khola watershed REDD+ study area with community forest to evaluate satellite image segmentation for land cover, a linear regression model for above-ground biomass (AGB), and field data for estimating and monitoring tree crowns and AGB. We study three specific years, 2002, 2009 and 2012, integrating WorldView-2 and airborne LiDAR data for tree-species-level analysis. 3. Nuwakot district: This district was selected to study the impact of establishing tree plantations on barren/fallow land. Over the last 40 years, this area has gone through drastic changes, from barren land to forest consisting of tree species such as Dalbergia sissoo, Leucaena leucocephala and Michelia champaca. In 1994, this district area was registered and established to grow and process high-quality shade-grown Arabica coffee. Here we use temporal satellite images and repeat terrestrial and aerial photographs, along with plot-level biomass, to show the impact of this positive transformation of the landscape on above- and below-ground carbon. The study time period is 1990-2015.
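
    For the Chitwan-style AGB step, a linear regression of field-measured biomass on segmentation-derived crown metrics can be sketched as follows; the predictors, coefficients, and data are illustrative stand-ins, not the study's measurements:

```python
import numpy as np

# Hypothetical sketch: relate remotely sensed crown metrics to field AGB
# with ordinary least squares. Data are random stand-ins.
rng = np.random.default_rng(2)
crown_area = rng.uniform(5, 80, size=40)          # m^2, from segmentation
height = rng.uniform(4, 30, size=40)              # m, e.g., from LiDAR
agb = 0.8 * crown_area + 2.5 * height + rng.normal(0, 5, size=40)

X = np.column_stack([np.ones_like(crown_area), crown_area, height])
coef, *_ = np.linalg.lstsq(X, agb, rcond=None)    # intercept + 2 slopes
pred = X @ coef
r2 = 1 - ((agb - pred) ** 2).sum() / ((agb - agb.mean()) ** 2).sum()
print("coefficients:", coef, "R^2:", round(r2, 3))
```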

  18. The Yellow Sea [high res

    NASA Image and Video Library

    2015-02-27

    Remote sensing of ocean color in the Yellow Sea can be a challenge. Phytoplankton, suspended sediments, and dissolved organic matter color the water while various types of aerosols modify those colors before they are "seen" by orbiting radiometers. The Aqua-MODIS data used to create the above image were collected on February 24, 2015. NASA's OceanColor Web is supported by the Ocean Biology Processing Group (OBPG) at NASA's Goddard Space Flight Center. Our responsibilities include the collection, processing, calibration, validation, archive and distribution of ocean-related products from a large number of operational, satellite-based remote-sensing missions providing ocean color, sea surface temperature and sea surface salinity data to the international research community since 1996. Credit: NASA/Goddard/Ocean Color

  19. How operational issues impact science peer review

    NASA Astrophysics Data System (ADS)

    Blacker, Brett S.; Golombek, Daniel; Macchetto, Duccio

    2006-06-01

    In some eyes, the Phase I proposal selection process is the most important activity handled by the Space Telescope Science Institute (STScI). Proposing for HST and other missions consists of requesting observing time and/or archival research funding. This step is called Phase I, where the scientific merit of a proposal is considered by a community-based peer-review process. Accepted proposals then proceed through Phase II, where the observations are specified in sufficient detail to enable scheduling on the telescope. Each cycle, the Hubble Space Telescope (HST) Telescope Allocation Committee (TAC) reviews proposals and awards observing time that is valued at $0.5B when the total expenditures for HST over its lifetime are figured on an annual basis. This is in fact a very important endeavor that we continue to fine-tune and tweak. This process is open to the science community and we constantly receive comments and praise for it. In the last year we have had to deal with the loss of the Space Telescope Imaging Spectrograph (STIS) and the move from 3-gyro operations to 2-gyro operations. This paper will outline how operational issues impact the HST science peer review process. We will discuss the process that was used to recover from the loss of the STIS instrument and how we dealt with the loss of 1/3 of the current science observations. We will also discuss the issues relating to 3-gyro vs. 2-gyro operations and how those changes impacted proposers, our in-house processing and the TAC.

  20. Beyond the continuum: a multi-dimensional phase space for neutral-niche community assembly.

    PubMed

    Latombe, Guillaume; Hui, Cang; McGeoch, Melodie A

    2015-12-22

    Neutral and niche processes are generally considered to interact in natural communities along a continuum, exhibiting community patterns bounded by pure neutral and pure niche processes. The continuum concept uses niche separation, an attribute of the community, to test the hypothesis that communities are bounded by pure niche or pure neutral conditions. It does not accommodate interactions via feedback between processes and the environment. By contrast, we introduce the Community Assembly Phase Space (CAPS), a multi-dimensional space that uses community processes (such as dispersal and niche selection) to define the limiting neutral and niche conditions and to test the continuum hypothesis. We compare the outputs of modelled communities in a heterogeneous landscape, assembled by pure neutral, pure niche and composite processes. Differences in patterns under different combinations of processes in CAPS reveal hidden complexity in neutral-niche community dynamics. The neutral-niche continuum only holds for strong dispersal limitation and niche separation. For weaker dispersal limitation and niche separation, neutral and niche processes amplify each other via feedback with the environment. This generates patterns that lie well beyond those predicted by a continuum. Inferences drawn from patterns about community assembly processes can therefore be misguided when based on the continuum perspective. CAPS also demonstrates the complementary information value of different patterns for inferring community processes and captures the complexity of community assembly. It provides a general tool for studying the processes structuring communities and can be applied to address a range of questions in community and metacommunity ecology. © 2015 The Author(s).
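
    To make the neutral-niche mixing concrete, here is a toy lottery model (not CAPS itself; all parameter names are illustrative) in which each vacated site is refilled either neutrally, in proportion to abundance, or by niche selection toward a local environmental optimum. The mixing weight plays the role of a position along one axis of such a phase space:

```python
import numpy as np

def assemble(n_species=10, n_sites=500, steps=20000, niche_strength=0.5, seed=0):
    """Toy community assembly mixing neutral and niche recruitment."""
    rng = np.random.default_rng(seed)
    optima = np.linspace(0, 1, n_species)        # each species' niche optimum
    env = rng.random(n_sites)                    # site environments
    comm = rng.integers(n_species, size=n_sites) # initial random community
    for _ in range(steps):
        site = rng.integers(n_sites)             # a death opens a site
        if rng.random() < niche_strength:
            # Niche recruitment: favor species matching the site environment.
            fit = np.exp(-((env[site] - optima) ** 2) / 0.02)
            comm[site] = rng.choice(n_species, p=fit / fit.sum())
        else:
            # Neutral recruitment: copy a random individual (abundance-weighted).
            comm[site] = comm[rng.integers(n_sites)]
    return np.bincount(comm, minlength=n_species)

print(assemble(niche_strength=0.0))   # pure neutral endpoint
print(assemble(niche_strength=1.0))   # pure niche endpoint
```

    Sweeping niche_strength (and a dispersal kernel, omitted here) traces out community patterns between the two endpoints, which is the kind of space the abstract's phase-space framing interrogates.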

  1. Beyond the continuum: a multi-dimensional phase space for neutral–niche community assembly

    PubMed Central

    Latombe, Guillaume; McGeoch, Melodie A.

    2015-01-01

    Neutral and niche processes are generally considered to interact in natural communities along a continuum, exhibiting community patterns bounded by pure neutral and pure niche processes. The continuum concept uses niche separation, an attribute of the community, to test the hypothesis that communities are bounded by pure niche or pure neutral conditions. It does not accommodate interactions via feedback between processes and the environment. By contrast, we introduce the Community Assembly Phase Space (CAPS), a multi-dimensional space that uses community processes (such as dispersal and niche selection) to define the limiting neutral and niche conditions and to test the continuum hypothesis. We compare the outputs of modelled communities in a heterogeneous landscape, assembled by pure neutral, pure niche and composite processes. Differences in patterns under different combinations of processes in CAPS reveal hidden complexity in neutral–niche community dynamics. The neutral–niche continuum only holds for strong dispersal limitation and niche separation. For weaker dispersal limitation and niche separation, neutral and niche processes amplify each other via feedback with the environment. This generates patterns that lie well beyond those predicted by a continuum. Inferences drawn from patterns about community assembly processes can therefore be misguided when based on the continuum perspective. CAPS also demonstrates the complementary information value of different patterns for inferring community processes and captures the complexity of community assembly. It provides a general tool for studying the processes structuring communities and can be applied to address a range of questions in community and metacommunity ecology. PMID:26702047

  2. Paranoia Symptoms Moderate the Impact of Emotional Context Processing on Community Functioning of Individuals with Schizophrenia.

    PubMed

    Park, Kiho; Choi, Kee-Hong

    2018-04-26

    This study examined whether better emotional context processing is associated with better community functioning among persons with schizophrenia, and whether the relationship between the two variables is moderated by level of paranoid symptoms. The Brief Psychiatric Rating Scale-Expanded Version, Emotional Context Processing Scale, and Multnomah Community Ability Scale were administered to 39 community-dwelling participants with schizophrenia or schizoaffective disorder. Emotional context processing had a small-to-moderate association with community functioning. However, the association between emotional context processing and community functioning was moderated by level of paranoid symptoms. Emotional context processing in participants with mild paranoid symptoms was strongly associated with better community functioning, whereas emotional context processing in those with severe paranoid symptoms was not. Emotional context processing and the degree of paranoia should be considered in treatment plans designed to enhance the community functioning of individuals with schizophrenia to help them improve their understanding of social situations.
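
    Statistically, the reported moderation corresponds to an interaction term in a regression of community functioning on emotional context processing and paranoia. A minimal sketch with stand-in data (variable names and effect sizes are assumptions, not the study's values):

```python
import numpy as np
import statsmodels.api as sm

# Simulate n = 39 participants; a significant interaction coefficient means
# paranoia moderates the ECP-functioning association.
rng = np.random.default_rng(3)
n = 39
ecp = rng.normal(size=n)                      # emotional context processing
paranoia = rng.normal(size=n)                 # paranoid symptom severity
functioning = 0.5 * ecp - 0.4 * ecp * paranoia + rng.normal(size=n)

X = sm.add_constant(np.column_stack([ecp, paranoia, ecp * paranoia]))
fit = sm.OLS(functioning, X).fit()
print(fit.params)                              # last entry: interaction slope
print("interaction p-value:", fit.pvalues[3])
```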

  3. Exploiting Satellite Archives to Estimate Global Glacier Volume Changes

    NASA Astrophysics Data System (ADS)

    McNabb, R. W.; Nuth, C.; Kääb, A.; Girod, L.

    2017-12-01

    In the past decade, the availability of, and ability to process, remote sensing data over glaciers has expanded tremendously. Newly opened satellite image archives, combined with new processing techniques as well as increased computing power and storage capacity, have given the glaciological community the ability to observe and investigate glaciological processes and changes on a truly global scale. In particular, the opening of the ASTER archives provides further opportunities to both estimate and monitor glacier elevation and volume changes globally, including potentially on sub-annual timescales. With this explosion of data availability, however, comes the challenge of seeing the forest instead of the trees. The high volume of data available means that automated detection and proper handling of errors and biases in the data become critical in order to properly study the processes that we wish to see. These include holes and blunders in digital elevation models (DEMs) derived from optical data, and penetration of radar signals leading to biases in DEMs derived from radar data, among other sources. Here, we highlight new advances in the ability to sift through high-volume datasets, and apply these techniques to estimate recent glacier volume changes in the Caucasus Mountains, Scandinavia, Africa, and South America. By properly estimating and correcting for these biases, we additionally provide a detailed accounting of the uncertainties in these estimates of volume changes, leading to more reliable results that have applicability beyond the glaciological community.
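
    The elevation-change bookkeeping at the heart of such estimates can be sketched as a masked DEM difference; the void-filling choice and nodata convention below are simplifying assumptions, and a real workflow would add co-registration, blunder filtering, and bias corrections:

```python
import numpy as np

def glacier_volume_change(dem_t0, dem_t1, cell_area_m2, nodata=-9999.0):
    """Difference two co-registered DEMs over a glacier and integrate to a
    volume change, ignoring holes/nodata in either epoch."""
    d0 = np.where(dem_t0 == nodata, np.nan, dem_t0)
    d1 = np.where(dem_t1 == nodata, np.nan, dem_t1)
    dh = d1 - d0                                   # elevation change (m)
    valid = np.isfinite(dh)
    mean_dh = np.nanmean(dh)
    # Fill voids with the mean change over valid cells (a common simple choice).
    volume = (np.nansum(dh) + mean_dh * (~valid).sum()) * cell_area_m2
    return dh, volume                              # m, m^3

# Toy example: a uniform 2 m surface lowering on a 30 m grid, with one hole.
dem0 = np.full((100, 100), 3000.0)
dem1 = dem0 - 2.0
dem1[0, 0] = -9999.0
_, dv = glacier_volume_change(dem0, dem1, cell_area_m2=30.0 * 30.0)
print(dv / 1e6, "x 10^6 m^3")
```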

  4. Psychophysical evaluation of the image quality of a dynamic flat-panel digital x-ray image detector using the threshold contrast detail detectability (TCDD) technique

    NASA Astrophysics Data System (ADS)

    Davies, Andrew G.; Cowen, Arnold R.; Bruijns, Tom J. C.

    1999-05-01

    We are currently in an era of active development of the digital X-ray imaging detectors that will serve the radiological communities in the new millennium. The rigorous comparative physical evaluation of such devices is therefore becoming increasingly important from both the technical and clinical perspectives. The authors have been actively involved in the evaluation of a clinical demonstration version of a flat-panel dynamic digital X-ray image detector (or FDXD). Results of the objective physical evaluation of this device have been presented elsewhere at this conference. The imaging performance of FDXD under radiographic exposure conditions has been previously reported, and in this paper a psychophysical evaluation of the FDXD detector operating under continuous fluoroscopic conditions is presented. The evaluation technique employed was the threshold contrast detail detectability (TCDD) technique, which enables image quality to be measured on devices operating in the clinical environment. This approach addresses image quality in the context of both the image acquisition and display processes, and uses human observers to measure performance. The Leeds test objects TO[10] and TO[10+] were used to obtain comparative measurements of performance on the FDXD and two digital spot fluorography (DSF) systems, one utilizing a Plumbicon camera and the other a state-of-the-art CCD camera. Measurements were taken at a range of detector entrance exposure rates, namely 6, 12, 25 and 50 µR/s. In order to facilitate comparisons between the systems, all fluoroscopic image processing, such as noise reduction algorithms, was disabled during the experiments. At the highest dose rate FDXD significantly outperformed the DSF comparison systems in the TCDD comparisons. At 25 and 12 µR/s all three systems performed in an equivalent manner, and at the lowest exposure rate FDXD was inferior to the two DSF systems. At standard fluoroscopic exposures, FDXD performed in an equivalent manner to the DSF systems for the TCDD comparisons. This suggests that FDXD would perform adequately in a clinical fluoroscopic environment, and our initial clinical experiences support this. Noise reduction processing of the fluoroscopic data acquired on FDXD was also found to further improve TCDD performance. FDXD therefore combines acceptable fluoroscopic performance with excellent radiographic (snapshot) imaging fidelity, allowing the possibility of a universal X-ray detector to be developed based on FDXD's technology. It is also envisaged that fluoroscopic performance will be improved by the development of digital image enhancement techniques specifically tailored to the characteristics of the FDXD detector.
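
    A threshold-contrast estimate for a single detail size can be sketched by interpolating observer detection fractions to the 50% point across the test object's contrast steps; the data and the log-contrast interpolation below are illustrative assumptions, not the Leeds protocol:

```python
import numpy as np

# Fraction of presentations detected at each nominal contrast step for one
# detail size (made-up numbers for illustration).
contrasts = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # nominal % contrast
detected = np.array([0.05, 0.20, 0.45, 0.80, 1.00])  # fraction detected

# Interpolate on log-contrast, which is closer to linear psychometrically,
# to find the contrast detected 50% of the time.
log_threshold = np.interp(0.5, detected, np.log(contrasts))
print("threshold contrast ~", round(float(np.exp(log_threshold)), 2), "%")
```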

  5. Evaluation of a national programme to reduce inappropriate use of antibiotics for upper respiratory tract infections: effects on consumer awareness, beliefs, attitudes and behaviour in Australia.

    PubMed

    Wutzke, Sonia E; Artist, Margaret A; Kehoe, Linda A; Fletcher, Miriam; Mackson, Judith M; Weekes, Lynn M

    2007-03-01

    The over-use of antibiotics, in particular, inappropriate use to treat upper respiratory tract infections (URTIs), is a global public health concern. In an attempt to reduce inappropriate use of antibiotics for URTIs, and, in particular, to modify patient misconceptions about the effectiveness of antibiotics for URTIs, Australia's National Prescribing Service Ltd (NPS) has undertaken a comprehensive, multistrategic programme for health professionals and the community. Targeted strategies for the community, via the NPS common colds community campaign, commenced in 2000 and have been repeated annually during the winter months. Community strategies were closely integrated, using the same tagline, key messages and visual images, and were delivered in numerous settings including general practice, community pharmacy, child-care centres and community groups. Strategies included written information via newsletters and brochures, mass media activity using billboards, television, radio and magazines and small grants to promote local community education. The evaluation used multiple methods and data sources to measure process, impact and outcomes. Consistent with intervention messages, the integrated nationwide prescriber and consumer programme is associated with modest but consistent positive changes in consumer awareness, beliefs, attitudes and behaviour to the appropriate use of antibiotics for URTIs. These positive changes among the community are corroborated by a national decline in total antibiotic prescriptions dispensed in the community (from 23.08 million prescriptions in 1998-99 to 21.44 million in 2001-02) and, specifically, by a decline among the nine antibiotics commonly used for URTI such that by 2003 nationally 216,000 fewer prescriptions for URTI are written each year by general practitioners.
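
    The headline prescribing numbers imply the following relative decline (a quick check of the figures quoted above):

```python
# National dispensed antibiotic prescriptions, 1998-99 vs 2001-02
# (figures from the abstract), expressed as a relative change.
before, after = 23.08e6, 21.44e6
drop = before - after
print(f"{drop / 1e6:.2f} million fewer prescriptions "
      f"({100 * drop / before:.1f}% decline)")
```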

  6. Imaging the Population Dynamics of Bacterial Communities in the Zebrafish Gut

    NASA Astrophysics Data System (ADS)

    Jemielita, Matthew; Taormina, Michael; Burns, Adam; Zac Stephens, W.; Hampton, Jennifer; Guillemin, Karen; Parthasarathy, Raghuveer

    2013-03-01

    The vertebrate gut is home to a diverse microbial ecosystem whose composition has a strong influence on the development and health of the host organism. While researchers are increasingly able to identify the constituent members of the microbiome, very little is known about the spatial and temporal dynamics of commensal microbial communities, including the mechanisms by which communities nucleate, grow, and interact. We address these issues using a model organism: the larval zebrafish (Danio rerio) prepared microbe-free and inoculated with controlled compositions of fluorophore-expressing bacteria. Live imaging with light sheet fluorescence microscopy enables visualization of individual bacterial cells as well as growing colonies over the entire volume of the gut over periods up to 24 hours. We analyze the structure and dynamics of imaged bacterial communities, uncovering correlations between population size, growth rates, and the timing of inoculations that suggest the existence of active changes in the host environment induced by early bacterial exposure. Our data provide the first visualizations of gut microbiota development over an extended period of time in a vertebrate.
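
    Population trajectories extracted from such image series are commonly summarized by fitting a growth model. A minimal sketch, assuming logistic growth and made-up counts (not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Simulated cell counts from segmented image stacks over 24 h.
t = np.arange(0, 24, 2.0)                        # hours post-inoculation
noise = np.random.default_rng(4).normal(1, 0.05, t.size)
counts = logistic(t, K=5e4, r=0.6, t0=10) * noise

(K, r, t0), _ = curve_fit(logistic, t, counts,
                          p0=(counts.max(), 0.5, t.mean()))
print(f"carrying capacity ~{K:.0f} cells, growth rate ~{r:.2f}/h")
```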

  7. MO-E-BRD-01: Adapt-A-Thon - Texas Hold’em Invitational

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kessler, M; Brock, K; Pouliot, J

    2014-06-15

    Software tools for image-based adaptive radiotherapy such as deformable image registration, contour propagation and dose mapping have progressed beyond the research setting and are now commercial products available as part of both treatment planning systems and stand-alone applications. These software tools are used together to create clinical workflows to detect, track and evaluate changes in the patient and to accumulate dose. Deviations uncovered in this process are used to guide decisions about replanning/adaptation, with the goal of keeping the delivery of prescribed dose "on target" throughout the entire course of radiotherapy. Since the output from one step of the adaptive process is used as an input for another, it is essential to understand and document the uncertainty associated with each step and how these uncertainties are propagated. This in turn requires an understanding of how the underlying tools work. Unfortunately, important details about the algorithms used to implement these tools are scarce or incomplete, too often for competitive reasons. This is in contrast to the situation involving other basic treatment planning algorithms such as dose calculations, where the medical physics community essentially requires vendors to provide physically important details about their underlying theory and clinical implementation. Vendors should adopt this same level of information sharing when it comes to the tools and techniques for image-guided adaptive radiotherapy. The goal of this session is to start this process by inviting vendors and medical physicists to discuss and demonstrate the available tools and describe how they are intended to be used in clinical practice. The format of the session will involve a combination of formal presentations, interactive demonstrations, audience participation and some friendly "Texas style" competition. Learning Objectives: Understand the components of the image-based adaptive radiotherapy process. Understand how these components are implemented in various commercial systems. Understand the different use cases and workflows currently supported by these tools.

  8. Body Image Satisfaction among Blacks

    ERIC Educational Resources Information Center

    Gustat, Jeanette; Carton, Thomas W.; Shahien, Amir A.; Andersen, Lori

    2017-01-01

    Satisfaction with body image is a factor related to health outcomes. The purpose of this study is to examine the relationship between body image satisfaction and body size perception in an urban, Black community sample in New Orleans, Louisiana. Only 42.2% of respondents were satisfied with their body image and 44.1% correctly perceived their body…

  9. Perspective: Innocence and due diligence: managing unfounded allegations of scientific misconduct.

    PubMed

    Goldenring, James R

    2010-03-01

    While the incidence of fraud in science is well documented, issues related to the establishment of innocence in cases of fallacious allegations remain unaddressed. In this article, the author uses his own experience to examine issues that arise when investigators are falsely accused of scientific fraud. Investigators must understand the processes in place to protect themselves against false accusations. The present system takes a position of guilty until proven innocent, a concept that is antithetical to American principles of jurisprudence. Yet this stance is acceptable as a requirement for membership in the scientific community, more reflective of the rules within a guild organization. The necessity for proof of innocence by members of the scientific community carries obligations that transcend normal legal assumptions. Scientists must safeguard their reputations by organizing and maintaining all original image files and data relevant to publications and grant proposals. Investigators must be able to provide clear documentation rapidly whenever concerns are raised during the review process. Moreover, peer-reviewed journals must be diligent not only in the identification of fraud but also in providing rapid due process for adjudication of allegations. The success of the scientific guild rules of conduct lies in the practice of due diligence by both scientists and journal editors in questions of scientific misconduct.

  10. Applying Strategic Visualization® to Lunar and Planetary Mission Design

    NASA Technical Reports Server (NTRS)

    Frassanito, John R.; Cooke, D. R.

    2002-01-01

    NASA teams, such as the NASA Exploration Team (NEXT), utilize advanced computational visualization processes to develop mission designs and architectures for lunar and planetary missions. One such process, Strategic Visualization™, is a tool used extensively to help mission designers visualize various design alternatives and present them to other participants of their team. The participants, which may include NASA, industry, and the academic community, are distributed within a virtual network. Consequently, computer animation and other digital techniques provide an efficient means to communicate top-level technical information among team members. Today, Strategic Visualization™ is used extensively both in the mission design process within the technical community and to communicate the value of space exploration to the general public. Movies and digital images have been generated and shown on nationally broadcast television and the Internet, as well as in magazines and digital media. In our presentation we will show excerpts of a computer-generated animation depicting the reference Earth/Moon L1 Libration Point Gateway architecture. The Gateway serves as a staging corridor for human expeditions to the lunar poles and other surface locations. Also shown are crew transfer systems and current reference lunar excursion vehicles, as well as the human and robotic construction of an inflatable telescope array for deployment to the Sun/Earth Libration Point.

  11. Marketing and Community Mental Health Centers.

    ERIC Educational Resources Information Center

    Ferniany, Isaac W.; Garove, William E.

    1983-01-01

    Suggests that a marketing approach can be applied to community mental health centers. Marketing is a management orientation of providing services for, not to, patients in a systematic manner, which can help mental health centers improve services, strengthen community image, achieve financial independence and aid in staff recruitment. (Author)

  12. [Health-related images and concepts among adolescents living in rural areas of Brazil].

    PubMed

    Costa, Anny Giselly Milhome da; Vieira, Neiva Francenely Cunha; Gubert, Fabiane do Amaral; Ferreira, Adriana Gomes Nogueira; Scopacasa, Lígia Fernandes; Pinheiro, Patrícia Neyva da Costa

    2013-08-01

    The objective of this study was to describe health-related images and concepts among adolescents living in rural areas of Brazil, using photography. This was a qualitative community-based participatory study that used the photovoice method for data collection with groups of teenagers. Over a four-month period, 26 participants identified health problems in the rural community, took photographs, and reflected critically on the local reality. The adolescents presented pictures and stories that they organized into research themes and categories, representing inadequate living conditions for appropriate socioeconomic and cultural development and limiting the opportunities for change in this community. The study proved to be a positive health education strategy, involving young people in the community's health and maximizing the voice of teenagers as protagonists in their own history.

  13. The Utility of the Extended Images in Ambient Seismic Wavefield Migration

    NASA Astrophysics Data System (ADS)

    Girard, A. J.; Shragge, J. C.

    2015-12-01

    Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and highly used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags to the imaging conditions. These extended images allow users to directly assess whether images focus better with different parameters, which leads to MVA techniques that are based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternate method currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, similar to active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended image domain. In synthetic ambient imaging tests with varying degrees of error introduced to the velocity model, the extended images are sensitive to velocity model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while zero-lags show none. This may be a valuable missing piece for ambient migration techniques that have yielded largely inconclusive results, and might be an important piece of information for performing AMVA from ambient seismic recordings.
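
    The correlation-lag machinery underlying extended images starts from the cross-correlation of recordings. A stripped-down sketch of correlating two ambient traces and reading off the peak lag (whitening, stacking, and windowing, which real workflows require, are omitted):

```python
import numpy as np

def noise_crosscorrelation(tr_a, tr_b, dt, max_lag_s):
    """Cross-correlate two ambient noise recordings; return the lag axis
    and the normalized correlation, the basic ingredient of
    ambient-wavefield imaging."""
    n = min(len(tr_a), len(tr_b))
    a = (tr_a[:n] - tr_a[:n].mean()) / tr_a[:n].std()
    b = (tr_b[:n] - tr_b[:n].mean()) / tr_b[:n].std()
    cc = np.correlate(a, b, mode="full") / n
    lags = np.arange(-n + 1, n) * dt
    keep = np.abs(lags) <= max_lag_s
    return lags[keep], cc[keep]

# Two stations recording the same noise wavefield with a 50-sample delay.
rng = np.random.default_rng(5)
src = rng.normal(size=4000)
rec_a, rec_b = src[:-50], src[50:]
lags, cc = noise_crosscorrelation(rec_a, rec_b, dt=0.01, max_lag_s=2.0)
print("peak lag:", round(float(lags[np.argmax(cc)]), 2), "s")  # ~ +0.5 s
```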

  14. Face Recognition with the Karhunen-Loeve Transform

    DTIC Science & Technology

    1991-12-01

    anthropometry community? Methodology: As part of this thesis, face recognition software is developed on the Silicon Graphics 4D Personal Iris ... the anthropometry community. Standards: The most important performance criterion is classification accuracy, which is the percentage of correct ... demonstrated by Tarr (24). [Figure 2.6 (caption truncated: "After the ..."): network schematic showing a 64-by-64-pixel input image (x1, x2, ..., x64), 16 hidden-layer units, and a reconstructed output image (y1, y2, ..., y64).]
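
    The Karhunen-Loeve transform named in the title is essentially what is now called an eigenface decomposition. A minimal sketch, assuming flattened 64x64 images and a 16-component basis to echo the network dimensions in the excerpt (data are random stand-ins):

```python
import numpy as np

# Karhunen-Loeve (eigenface) sketch: project faces onto the leading
# eigenvectors of the training set and classify by nearest neighbor.
rng = np.random.default_rng(6)
train = rng.random((20, 64 * 64))                 # 20 flattened training faces
mean_face = train.mean(axis=0)
# SVD of the mean-centered data gives the KL basis as right singular vectors.
u, s, vt = np.linalg.svd(train - mean_face, full_matrices=False)
basis = vt[:16]                                   # 16 KL components

def project(face):
    """Encode a face as its 16 KL coefficients."""
    return basis @ (face - mean_face)

train_codes = (train - mean_face) @ basis.T       # codes for all training faces
probe = train[3] + rng.normal(0, 0.01, 64 * 64)   # noisy copy of face 3
nearest = np.argmin(np.linalg.norm(train_codes - project(probe), axis=1))
print("matched training face:", nearest)          # expect 3
```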

  15. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    PubMed

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing can significantly impact image quality when settings are left near default values.
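
    The matching analysis reduces to a paired comparison of the doses at which image quality was judged equivalent. A sketch with simulated pairs centered on the reported difference (values are stand-ins, not the study data):

```python
import numpy as np
from scipy.stats import wilcoxon

# 42 matching tasks: effective dose for the GE-processed image vs. the
# Agfa-processed image judged equivalent in quality (simulated, uSv).
rng = np.random.default_rng(7)
dose_ge = rng.normal(60, 10, size=42)
dose_agfa = dose_ge - rng.normal(11, 9, size=42)  # matched at lower dose

# Non-parametric paired test of whether the dose difference is nonzero.
stat, p = wilcoxon(dose_ge, dose_agfa)
saving = np.median(dose_ge - dose_agfa)
print(f"median dose saving: {saving:.1f} uSv, p = {p:.2g}")
```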

  16. STS-68 radar image: Glasgow, Missouri

    NASA Image and Video Library

    1994-10-07

    STS068-S-055 (7 October 1994) --- This is a false-color L-Band image of an area near Glasgow, Missouri, centered at about 39.2 degrees north latitude and 92.8 degrees west longitude. The image was acquired using the L-Band radar channel, with the horizontally transmitted and received (HH) and horizontally transmitted, vertically received (HV) polarizations combined. The data were acquired by the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Space Shuttle Endeavour on orbit 50 on October 3, 1994. The area shown is approximately 37 by 25 kilometers (23 by 16 miles). The radar data, coupled with pre-flood aerial photography and satellite data and post-flood topographic and field data, are being used to evaluate changes in landforms associated with levee breaks, where deposits formed during the widespread flooding in 1993 along the Missouri and Mississippi Rivers. The distinct radar scattering properties of farmland, sand fields and scoured areas will be used to inventory flood plains along the Missouri River and determine the processes by which these areas return to preflood conditions. The image shows one such levee break near Glasgow, Missouri. In the upper center of the radar image, below the bend of the river, is a region covered by several meters of sand, shown as dark regions. West (left) of the dark areas, a gap in the levee tree canopy shows the area where the levee failed. Radar data such as these can help scientists more accurately assess the potential for future flooding in this region and how that might impact surrounding communities. Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-Band (24 centimeters), C-Band (6 centimeters) and X-Band (3 centimeters). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory (JPL). X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR. (P-44734)

  17. LSST Survey Data: Models for EPO Interaction

    NASA Astrophysics Data System (ADS)

    Olsen, J. K.; Borne, K. D.

    2007-12-01

    The potential for education and public outreach with the Large Synoptic Survey Telescope is as far reaching as the telescope itself. LSST data will be available to the public, giving anyone with a web browser a movie-like window on the Universe. The LSST project is unique in designing its data management and data access systems with the public and community users in mind. The enormous volume of data to be generated by LSST is staggering: 30 Terabytes per night, 10 Petabytes per year. The final database of extracted science parameters from the images will also be enormous -- 50-100 Petabytes -- a rich gold mine for data mining and scientific discovery potential. LSST will also generate 100,000 astronomical alerts per night, for 10 years. The LSST EPO team is examining models for EPO interaction with the survey data, particularly in how the community (amateurs, teachers, students, and general public) can participate in the discovery process. We will outline some of our models of community interaction for inquiry-based science using the LSST survey data, and we invite discussion on these topics.

  18. Label-free in situ SERS imaging of biofilms.

    PubMed

    Ivleva, Natalia P; Wagner, Michael; Szkola, Agathe; Horn, Harald; Niessner, Reinhard; Haisch, Christoph

    2010-08-12

    Surface-enhanced Raman scattering (SERS) is a promising technique for the chemical characterization of biological systems. It yields highly informative spectra, can be applied directly in aqueous environment, and has high sensitivity in comparison with normal Raman spectroscopy. Moreover, SERS imaging can provide chemical information with spatial resolution in the micrometer range (chemical imaging). In this paper, we report for the first time on the application of SERS for in situ, label-free imaging of biofilms and demonstrate the suitability of this technique for the characterization of the complex biomatrix. Biofilms, being communities of microorganisms embedded in a matrix of extracellular polymeric substances (EPS), represent the predominant mode of microbial life. Knowledge of the chemical composition and the structure of the biofilm matrix is important in different fields, e.g., medicine, biology, and industrial processes. We used colloidal silver nanoparticles for the in situ SERS analysis. Good SERS measurement reproducibility, along with a significant enhancement of Raman signals by SERS (>10^4) and highly informative SERS signature, enables rapid SERS imaging (1 s for a single spectrum) of the biofilm matrix. Altogether, this work illustrates the potential of SERS for biofilm analysis, including the detection of different constituents and the determination of their distribution in a biofilm even at low biomass concentration.

  19. NASA PDS IMG: Accessing Your Planetary Image Data

    NASA Astrophysics Data System (ADS)

    Padams, J.; Grimes, K.; Hollins, G.; Lavoie, S.; Stanboli, A.; Wagstaff, K.

    2018-04-01

    The Planetary Data System Cartography and Imaging Sciences Node provides a number of tools and services to integrate the 700+ TB of image data so information can be correlated across missions, instruments, and data sets and easily accessed by the science community.

  20. Space Radar Image of Mount Pinatubo Volcano, Philippines

    NASA Technical Reports Server (NTRS)

    1994-01-01

    These are color composite radar images showing the area around Mount Pinatubo in the Philippines. The images were acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on April 14, 1994 (left image) and October 5, 1994 (right image). The images are centered at about 15 degrees north latitude and 120.5 degrees east longitude. Both images were obtained with the same viewing geometry. The color composites were made by displaying the L-band (horizontally transmitted and received) in red; the L-band (horizontally transmitted and vertically received) in green; and the C-band (horizontally transmitted and vertically received) in blue. The area shown is approximately 40 kilometers by 65 kilometers (25 miles by 40 miles). The main volcanic crater on Mount Pinatubo produced by the June 1991 eruptions and the steep slopes on the upper flanks of the volcano are easily seen in these images. Red on the high slopes shows the distribution of the ash deposited during the 1991 eruption, which appears red because of the low cross-polarized radar returns at C and L bands. The dark drainages radiating away from the summit are the smooth mudflows, which even three years after the eruptions continue to flood the river valleys after heavy rain. Comparing the two images shows that significant changes have occurred in the intervening five months along the Pasig-Potrero rivers (the dark area in the lower right of the images). Mudflows, called 'lahars,' that occurred during the 1994 monsoon season filled the river valleys, allowing the lahars to spread over the surrounding countryside. Three weeks before the second image was obtained, devastating lahars more than doubled the area affected in the Pasig-Potrero rivers, which is clearly visible as the increase in dark area on the lower right of the images. Migration of deposition to the east (right) has affected many communities. Newly affected areas included the community of Bacolor, Pampanga, where thousands of homes were buried in meters of hot mud and rock as 80,000 people fled the lahar-stricken area. Scientists are closely monitoring the westward migration (toward the left in this image) of the lahars as the Pasig-Potrero rivers seek to join with the Porac River, an area that has not seen laharic activity since the eruption. This could be devastating because the Pasig-Potrero rivers might be permanently redirected to lower elevations along the Porac River where communities are located. Ground saturation with water during the rainy season reveals inactive channels that were dry in the April image. A small lake has turned into a pond in the lower reaches of the Potrero River because the channels are full of lahar deposits and the surface runoff has nowhere to flow. Changes in the degree of erosion in ash and pumice deposits from the 1991 eruption can also be seen in the channels that deliver the mudflow material to the Pasig-Potrero rivers. The 1991 Mount Pinatubo eruption is well known for its near-global effects on the atmosphere and short-term climate due to the large amount of sulfur dioxide that was injected into the upper atmosphere. Locally, however, the effects will most likely continue to impact surrounding areas for as long as the next 10 to 15 years. Mudflows, quite certainly, will continue to pose severe hazards to adjacent areas.
Radar observations like those obtained by SIR-C/X-SAR will play a key role in monitoring these changes because of the radar's ability to see in daylight or darkness and even in the worst weather conditions. Radar imaging will be particularly useful, for example, during the monsoon season, when the lahars form. Frequent imaging of these lahar fields will allow scientists to better predict when they are likely to begin flowing again and which communities might be at risk. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.
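
    The band-to-color mapping described in this caption is straightforward to reproduce in outline. Below is a minimal sketch, assuming three co-registered 2-D backscatter arrays (hypothetical inputs; real SIR-C/X-SAR products would first need radiometric calibration); it illustrates the compositing idea, not the mission's actual processing chain.

    ```python
    import numpy as np

    def normalize(band):
        """Linearly stretch a backscatter band to [0, 1] for display."""
        lo, hi = np.percentile(band, (2, 98))  # clip outliers at the 2nd/98th percentiles
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

    def sar_color_composite(l_hh, l_hv, c_hv):
        """Stack L-HH (red), L-HV (green), C-HV (blue) into an RGB image.

        l_hh, l_hv, c_hv: 2-D numpy arrays of co-registered backscatter
        values (hypothetical inputs for illustration only).
        """
        rgb = np.dstack([normalize(l_hh), normalize(l_hv), normalize(c_hv)])
        return (rgb * 255).astype(np.uint8)
    ```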

  1. Creating & using specimen images for collection documentation, research, teaching and outreach

    NASA Astrophysics Data System (ADS)

    Demouthe, J. F.

    2012-12-01

    In this age of digital media, there are many opportunities for use of good images of specimens. On-line resources such as institutional web sites and global sites such as PaleoNet and the Paleobiology Database provide venues for collection information and images. Pictures can also be made available to the general public through popular media sites such as Flickr and Facebook, where they can be retrieved and used by teachers, students, and the general public. The number of requests for specimen loans can be drastically reduced by offering the scientific community access to data and specimen images using the internet. This is an important consideration in these days of limited support budgets, since it reduces the amount of staff time necessary for giving researchers and educators access to collections. It also saves wear and tear on the specimens themselves. Many institutions now limit or refuse to send specimens out of their own countries because of the risks involved in going through security and customs. The internet can bridge political boundaries, allowing everyone equal access to collections. In order to develop photographic documentation of a collection, thoughtful preparation will make the process easier and more efficient. Acquire the necessary equipment, establish standards for images, and develop a simple workflow design. Manage images in the camera, and produce the best possible results, rather than relying on time-consuming editing after the fact. It is extremely important that the images of each specimen be of the highest quality and resolution. Poor quality, low resolution photos are not good for anything, and will often have to be retaken when another need arises. Repeating the photography process involves more handling of specimens and more staff time. Once good photos exist, smaller versions can be created for use on the web. The originals can be archived and used for publication and other purposes.
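
    The closing recommendation, archiving high-resolution originals and deriving smaller versions for the web, can be scripted. A minimal sketch using the Pillow library; the file names are hypothetical.

    ```python
    from PIL import Image

    def make_web_version(src_path, dst_path, max_side=1200):
        """Create a reduced-size JPEG for web use, leaving the archival original intact."""
        with Image.open(src_path) as img:
            img.thumbnail((max_side, max_side))   # shrinks in place, preserving aspect ratio
            img.convert("RGB").save(dst_path, "JPEG", quality=90)

    # Hypothetical usage: archive the original TIFF, publish the JPEG derivative.
    # make_web_version("specimen_12345_original.tif", "specimen_12345_web.jpg")
    ```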

  2. Oxygen, politics and the American Revolution (with a note on the bicentennial of phlogiston).

    PubMed Central

    Harken, A H

    1976-01-01

    In this bicentennial year, it seems appropriate that each discipline examine its heritage. Two centuries ago, Joseph Priestley isolated "dephlogisticated air." International diplomacy surrounding the American and early French Revolutions provided an opportunity for Benjamin Franklin and Antoine Lavoisier to witness Priestley's work. The combined efforts of these analytical minds converted an illogical phlogiston myth into a practical and therapeutic principle. Lavoisier subsequently coined the word "oxy-gène." In the ensuing centuries, this substance has gained a central role in rational surgical therapy. The interaction between these scientists, their ultimate fate and their relationship to their communities appear to provide lessons relevant to present day biomedical research funding and the peer review process. The surgical community can be justifiably proud of its past. By reflecting on these events, we may perhaps concentrate the benefits without condemning ourselves to the repetition of previous error. PMID:791165

  3. Grief in the initial adjustment process to the continuing care retirement community.

    PubMed

    Ayalon, Liat; Green, Varda

    2012-12-01

    This paper examined the transition to continuing care retirement communities (CCRCs) within the framework of anticipatory and disenfranchised grief. Qualitative interviews with 29 residents and 19 adult children were conducted. Three major thematic categories emerged from the data. The first theme reflected ambivalence, dialectics or uncertainty about the CCRC as manifested by the various names assigned to it by respondents. The second theme reflected the acknowledgement of present and anticipatory losses and grief in response to the move. The final theme reflected respondents' disenfranchisement of their grief and loss and their view of the transition in a positive light. In their early adjustment period, residents and adult children are ambivalent about the transition, but often refrain from acknowledging their losses openly because of the image of the CCRC as a status symbol. Open acknowledgement of losses associated with the transition might be beneficial. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Invited Article: Mask-modulated lensless imaging with multi-angle illuminations

    NASA Astrophysics Data System (ADS)

    Zhang, Zibang; Zhou, You; Jiang, Shaowei; Guo, Kaikai; Hoshino, Kazunori; Zhong, Jingang; Suo, Jinli; Dai, Qionghai; Zheng, Guoan

    2018-06-01

    The use of multiple diverse measurements can make lensless phase retrieval more robust. Conventional diversity functions include aperture diversity, wavelength diversity, translational diversity, and defocus diversity. Here we discuss a lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions. In this scheme, we place a binary mask between the sample and the detector for imposing support constraints for the phase retrieval process. This support constraint enforces the light field to be zero at certain locations and is similar to the aperture constraint in Fourier ptychographic microscopy. We use a self-calibration algorithm to correct the misalignment of the binary mask. The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the reconstruction quality using mean square error and structural similarity index. The scheme is then experimentally tested by recovering images of a resolution target and biological samples. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. We provide the open-source implementation code for the broad research community.
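
    To make the role of the binary mask concrete: in iterative phase retrieval, a support constraint is enforced by zeroing the field outside the known support on every pass. The sketch below is a generic error-reduction loop with such a constraint, not the authors' algorithm; a plain FFT stands in for the physical free-space propagation, and all names are illustrative.

    ```python
    import numpy as np

    def retrieve_phase(measured_amp, support, n_iter=200):
        """Toy error-reduction loop with a binary-mask support constraint.

        measured_amp : 2-D array of detector-plane amplitudes (sqrt of intensity)
        support      : boolean 2-D array, True where the mask transmits light
        A plain FFT stands in for free-space propagation; a real lensless
        setup would use an angular-spectrum or Fresnel propagator instead.
        """
        field = np.exp(2j * np.pi * np.random.rand(*measured_amp.shape))  # random start
        for _ in range(n_iter):
            det = np.fft.fft2(field)
            det = measured_amp * np.exp(1j * np.angle(det))  # enforce measured amplitude
            field = np.fft.ifft2(det)
            field = field * support                          # enforce mask support (zero outside)
        return field
    ```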

  5. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.

    PubMed

    Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-02-23

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
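
    Since the target was validated with OpenCV's camera calibration module, the general workflow can be sketched as follows. This assumes a standard chessboard-style pattern and hypothetical file names; the paper's own laser-etched target geometry and detection step would differ.

    ```python
    import glob
    import cv2
    import numpy as np

    # Hypothetical 9x6 chessboard-style pattern; the paper's etched target
    # would supply its own geometry and corner-detection step.
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, size = [], [], None
    for fname in glob.glob("calib_frames/*.png"):  # hypothetical directory of fetoscope frames
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            size = gray.shape[::-1]  # (width, height)

    # Recover the intrinsic matrix and radial/tangential distortion coefficients.
    rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    frame = cv2.imread("frame.png")                # hypothetical frame to correct
    undistorted = cv2.undistort(frame, mtx, dist)  # distortion-compensated view
    ```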

  6. The Function Biomedical Informatics Research Network Data Repository

    PubMed Central

    Keator, David B.; van Erp, Theo G.M.; Turner, Jessica A.; Glover, Gary H.; Mueller, Bryon A.; Liu, Thomas T.; Voyvodic, James T.; Rasmussen, Jerod; Calhoun, Vince D.; Lee, Hyo Jong; Toga, Arthur W.; McEwen, Sarah; Ford, Judith M.; Mathalon, Daniel H.; Diaz, Michele; O’Leary, Daniel S.; Bockholt, H. Jeremy; Gadde, Syam; Preda, Adrian; Wible, Cynthia G.; Stern, Hal S.; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G.

    2015-01-01

    The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical datasets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 dataset consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 Tesla scanners. The FBIRN Phase 2 and Phase 3 datasets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN’s multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. PMID:26364863

  7. Using hyperspectral remote sensing for land cover classification

    NASA Astrophysics Data System (ADS)

    Zhang, Wendy W.; Sriharan, Shobha

    2005-01-01

    This project used a hyperspectral data set to classify land cover using remote sensing techniques. Many different earth-sensing satellites, with diverse sensors mounted on sophisticated platforms, are currently in earth orbit. These sensors are designed to cover a wide range of the electromagnetic spectrum and are generating enormous amounts of data that must be processed, stored, and made available to the user community. The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) collects data in 224 bands that are approximately 9.6 nm wide in contiguous bands between 0.40 and 2.45 µm. Hyperspectral sensors acquire images in many, very narrow, contiguous spectral bands throughout the visible, near-IR, and thermal IR portions of the spectrum. The unsupervised image classification procedure automatically categorizes the pixels in an image into land cover classes or themes. Experiments on using hyperspectral remote sensing for land cover classification were conducted during the 2003 and 2004 NASA Summer Faculty Fellowship Program at Stennis Space Center. Research Systems Inc.'s (RSI) ENVI software package was used in this application framework. In this application, emphasis was placed on: (1) spectrally oriented classification procedures for land cover mapping, particularly the supervised surface classification using AVIRIS data; and (2) identifying data endmembers.
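
    As a rough stand-in for the unsupervised classification step described above (ENVI provides ISODATA and k-means routines), a minimal k-means clustering of hyperspectral pixels might look like this; the cube variable is a hypothetical reflectance array, and this is an illustration rather than the project's actual procedure.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def unsupervised_classify(cube, n_classes=8):
        """Cluster hyperspectral pixels into land-cover classes.

        cube: (rows, cols, bands) reflectance array, e.g. an AVIRIS subset.
        Returns a (rows, cols) map of integer class labels.
        """
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands)       # one spectrum per row
        labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)
        return labels.reshape(rows, cols)
    ```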

  8. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration

    PubMed Central

    Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-01-01

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community. PMID:28287588

  9. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333
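
    The null-deviation logic, comparing observed species turnover against a distribution of turnover values from randomized communities, can be sketched schematically. The randomization below is deliberately simple and hypothetical; published analyses (e.g. Raup-Crick-style models) use more constrained null models.

    ```python
    import numpy as np

    def bray_curtis(a, b):
        """Bray-Curtis dissimilarity between two species-abundance vectors."""
        return np.abs(a - b).sum() / (a + b).sum()

    def null_deviation(site_a, site_b, n_null=999, rng=None):
        """Observed minus mean-null beta diversity, in null standard deviations.

        A deliberately simple null: redraw each site's individuals from the
        pooled relative abundances, preserving site totals. Real analyses
        apply more constrained randomizations.
        """
        if rng is None:
            rng = np.random.default_rng()
        obs = bray_curtis(site_a, site_b)
        pooled = (site_a + site_b).astype(float)
        probs = pooled / pooled.sum()
        nulls = np.empty(n_null)
        for i in range(n_null):
            ra = rng.multinomial(site_a.sum(), probs)
            rb = rng.multinomial(site_b.sum(), probs)
            nulls[i] = bray_curtis(ra, rb)
        return (obs - nulls.mean()) / nulls.std()
    ```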

  10. Synthesis of Multispectral Bands from Hyperspectral Data: Validation Based on Images Acquired by AVIRIS, Hyperion, ALI, and ETM+

    NASA Technical Reports Server (NTRS)

    Blonksi, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki

    2001-01-01

    Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design onto providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments. The produced datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account the spectral responses of both the synthesized, broad, multispectral bands and the utilized, narrow, hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and the EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data to the presented simulations.
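
    The spectral synthesis step, collapsing many narrow hyperspectral bands into one broad multispectral band weighted by spectral response, reduces to a weighted average. A minimal sketch under simplifying assumptions (only the broad band's response is modeled, whereas the paper also folds in each narrow band's own response; all names are hypothetical):

    ```python
    import numpy as np

    def synthesize_band(hyper_cube, hyper_centers, response):
        """Synthesize one broad multispectral band from narrow hyperspectral bands.

        hyper_cube    : (rows, cols, n_bands) hyperspectral radiance cube
        hyper_centers : (n_bands,) band-center wavelengths
        response      : callable giving the broad band's relative spectral
                        response at a given wavelength (hypothetical)
        """
        w = np.array([response(c) for c in hyper_centers])
        return (hyper_cube @ w) / w.sum()   # response-weighted average over bands
    ```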

  11. The essential role of amateur astronomers in enabling the Juno mission interaction with the public

    NASA Astrophysics Data System (ADS)

    Orton, G. S.; Hansen, C. J.; Tabataba-Vakili, F.; Bolton, S.; Jensen, E.

    2017-09-01

    JunoCam was added to the payload of the Juno mission largely to function in the role of education and public outreach. For the first time, the public is able to engage in the discussion and choice of targets for a major NASA mission. The discussion about which features to image is enabled by a bi-weekly updated map of Jupiter's cloud system, thereby engaging the community of amateur astronomers as a vast network of co-investigators, whose products stimulate conversation and global public awareness of Jupiter and Juno's investigative role. The contributed images provide the focus for ongoing discussion about various planetary features over a long time frame. Approximately two weeks before Juno's closest approach to Jupiter on each orbit, the atmospheric features that have been under discussion and are available to JunoCam on that perijove are nominated for voting, and the public at large votes on what to image at low latitudes, with the camera always taking images of the poles in each perijove. Public voting was tested for the first time on three regions for PJ3 and has continued since then for nearly all non-polar images. The results of public processing of JunoCam images range all the way from artistic renditions up to professional-equivalent analysis. All aspects of this effort are available on: https://www.missionjuno.swri.edu/junocam/.

  12. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format.

    PubMed

    Ahmed, Zeeshan; Dandekar, Thomas

    2015-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medical imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge in implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product line architecture based bioinformatics tool, 'Mining Scientific Literature (MSL)', which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats including text, PDF, XML and image files. Hence, MSL is an easy to install and use analysis tool for interpreting published scientific literature in PDF format.
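
    The extraction-plus-OCR step that MSL performs on published PDFs can be approximated with common open-source libraries. The sketch below uses PyMuPDF and pytesseract as stand-ins and is not the MSL implementation; the input path is hypothetical.

    ```python
    import io
    import fitz           # PyMuPDF
    import pytesseract
    from PIL import Image

    def extract_figure_text(pdf_path):
        """Pull embedded images out of a PDF and OCR any text inside them.

        A minimal stand-in for MSL's extraction step; the real tool adds
        layout analysis, keyword marginalization, and multiple output formats.
        """
        results = []
        doc = fitz.open(pdf_path)
        for page in doc:
            for img_info in page.get_images(full=True):
                xref = img_info[0]
                raw = doc.extract_image(xref)["image"]   # raw image bytes
                img = Image.open(io.BytesIO(raw))
                results.append(pytesseract.image_to_string(img))
        return results
    ```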

  13. CATE 2016 Indonesia: Image Calibration, Intensity Calibration, and Drift Scan

    NASA Astrophysics Data System (ADS)

    Hare, H. S.; Kovac, S. A.; Jensen, L.; McKay, M. A.; Bosh, R.; Watson, Z.; Mitchell, A. M.; Penn, M. J.

    2016-12-01

    The citizen Continental America Telescopic Eclipse (CATE) experiment aims to provide equipment for 60 sites across the path of totality of the August 21, 2017 total solar eclipse over the United States. The opportunity to gather ninety minutes of continuous images of the solar corona is unmatched by any previous eclipse event. In March of 2016, 5 teams were sent to Indonesia to test CATE equipment and procedures on the March 9th, 2016 total solar eclipse. A further goal of the trip was to practice procedures and gather data for testing data reduction methods. Of the five teams, four collected data. While in Indonesia, each group participated in community outreach in the location of their site. The 2016 eclipse allowed CATE to test the calibration techniques for the 2017 eclipse. Calibration dark current and flat field images were collected to remove variation across the cameras. Drift scan observations provided information to rotationally align the images from each site. These images' intensity values allowed for intensity calibration for each of the sites. A GPS at each site corrected for major computer errors in the time measurement of images. Further refinement of these processes is required before the 2017 eclipse. This work was made possible through the NSO Training for the 2017 Citizen CATE Experiment funded by NASA (NASA NNX16AB92A).
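
    The dark-current and flat-field calibration mentioned above follows the standard CCD reduction arithmetic. A minimal sketch, assuming master dark and (dark-subtracted) master flat arrays are already available; the variable names are illustrative.

    ```python
    import numpy as np

    def calibrate_frame(raw, dark, flat):
        """Apply dark-current and flat-field corrections to an eclipse frame.

        raw  : science exposure of the corona
        dark : master dark (mean of dark frames at matching exposure)
        flat : master flat field (assumed dark-subtracted here)
        """
        flat_norm = flat / flat.mean()   # unit-mean sensitivity map
        return (raw - dark) / flat_norm
    ```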

  14. Regulatory Aspects of Optical Methods and Exogenous Targets for Cancer Detection

    PubMed Central

    Tummers, Willemieke S.; Warram, Jason M.; Tipirneni, Kiranya E.; Fengler, John; Jacobs, Paula; Shankar, Lalitha; Henderson, Lori; Ballard, Betsy; Pogue, Brian W.; Weichert, Jamey P.; Bouvet, Michael; Sorger, Jonathan; Contag, Christopher H.; Frangioni, John V.; Tweedle, Michael F.; Basilion, James P.; Gambhir, Sanjiv S.; Rosenthal, Eben L.

    2017-01-01

    Considerable advances in cancer-specific optical imaging have improved the precision of tumor resection. In comparison to traditional imaging modalities, this technology is unique in its ability to provide real-time feedback to the operating surgeon. Given the significant clinical implications of optical imaging, there is an urgent need to standardize surgical navigation tools and contrast agents to facilitate swift regulatory approval. Because fluorescence-enhanced surgery requires a combination of both device and drug, each may be developed in conjunction, or separately, which are important considerations in the approval process. This report is the result of a one-day meeting held on May 4, 2016 with officials from the National Cancer Institute, the FDA, members of the American Society of Image-Guided Surgery, and members of the World Molecular Imaging Society, which discussed consensus methods for FDA-directed human testing and approval of investigational optical imaging devices as well as contrast agents for surgical applications. The goal of this workshop was to discuss FDA approval requirements and the expectations for approval of these novel drugs and devices, packaged separately or in combination, within the context of optical surgical navigation. In addition, the workshop acted to provide clarity to the research community on data collection and trial design. Reported here are the specific discussion items and recommendations from this critical and timely meeting. PMID:28428283

  15. Self-Concept in Childhood: The Role of Body Image and Sport Practice

    PubMed Central

    Mendo-Lázaro, Santiago; Polo-del-Río, María I.; Amado-Alonso, Diana; Iglesias-Gallego, Damián; León-del-Barco, Benito

    2017-01-01

    The purpose of this study was to explore the differences in satisfaction with body image depending on whether the subject practices organized sport or not, as well as the gender of the children. In addition, the study aims to examine the role of body image and the practice of organized sport on the process of building the academic, social, emotional, family and physical dimensions of self-concept in childhood. To do so, a sample of 944 pupils was used. These children were attending primary school in different centers of the Autonomous Community of Extremadura (Spain) and were between 9 and 12 years of age. The main results of the study show that three out of every four children participating in this study were not satisfied with their figure and one out of every five was very dissatisfied. The satisfaction or dissatisfaction with the figure was similar in boys and girls, although it could be appreciated that the ideal body image is partly conditioned by gender stereotypes. The children most satisfied with their body image had a greater academic and physical self-concept. The children who practiced organized sports had a greater physical and emotional self-concept. The children who were most dissatisfied with their body image and practiced organized sports had a lower family self-concept. All these findings are discussed with reference to previous research literature. PMID:28596750

  16. Molecular aspects of magnetic resonance imaging and spectroscopy.

    PubMed

    Boesch, C

    1999-01-01

    Magnetic resonance imaging (MRI) is a well known diagnostic tool in radiology that produces unsurpassed images of the human body, in particular of soft tissue. However, the medical community is often not aware that MRI is an important yet limited segment of magnetic resonance (MR) or nuclear magnetic resonance (NMR), as this method is called in basic science. The tremendous morphological information of MR images sometimes conceals the fact that MR signals in general contain much more information, especially about processes at the molecular level. NMR is successfully used in physics, chemistry, and biology to explore and characterize chemical reactions, molecular conformations, biochemical pathways, solid state materials, and many other applications that elucidate invisible characteristics of matter and tissue. In medical applications, knowledge of the molecular background of MRI and in particular MR spectroscopy (MRS) is an essential basis for understanding the molecular phenomena that lead to the macroscopic effects visible in diagnostic images or spectra. This review provides the necessary background to comprehend molecular aspects of magnetic resonance applications in medicine. An introduction to the physical basics aims at an understanding of some of the molecular mechanisms without extended mathematical treatment. Typical MR terminology is explained so that the reading of original MR publications is facilitated for non-MR experts. Applications in MRI and MRS are intended to illustrate the consequences of molecular effects on images and spectra.

  17. Self-Concept in Childhood: The Role of Body Image and Sport Practice.

    PubMed

    Mendo-Lázaro, Santiago; Polo-Del-Río, María I; Amado-Alonso, Diana; Iglesias-Gallego, Damián; León-Del-Barco, Benito

    2017-01-01

    The purpose of this study was to explore the differences in satisfaction with body image depending on whether the subject practices organized sport or not, as well as the gender of the children. In addition, the study aims to examine the role of body image and the practice of organized sport on the process of building the academic, social, emotional, family and physical dimensions of self-concept in childhood. To do so, a sample of 944 pupils was used. These children were attending primary school in different centers of the Autonomous Community of Extremadura (Spain) and were between 9 and 12 years of age. The main results of the study show that three out of every four children participating in this study were not satisfied with their figure and one out of every five was very dissatisfied. The satisfaction or dissatisfaction with the figure was similar in boys and girls, although it could be appreciated that the ideal body image is partly conditioned by gender stereotypes. The children most satisfied with their body image had a greater academic and physical self-concept. The children who practiced organized sports had a greater physical and emotional self-concept. The children who were most dissatisfied with their body image and practiced organized sports had a lower family self-concept. All these findings are discussed with reference to previous research literature.

  18. The Imaging and Medical Beam Line at the Australian Synchrotron

    NASA Astrophysics Data System (ADS)

    Hausermann, Daniel; Hall, Chris; Maksimenko, Anton; Campbell, Colin

    2010-07-01

    As a result of the enthusiastic support from the Australian biomedical, medical and clinical communities, the Australian Synchrotron is constructing a world-class facility for medical research, the `Imaging and Medical Beamline' (IMBL). The IMBL began phased commissioning in late 2008 and is scheduled to commence the first clinical research programs with patients in 2011. It will provide unrivalled x-ray facilities for imaging and radiotherapy across a wide range of research applications in diseases, treatments and the understanding of physiological processes. The main clinical research drivers are currently high-resolution, high-sensitivity cardiac and breast imaging, cell tracking applied to regenerative and stem cell medicine, and cancer therapies. The beamline has a maximum source-to-sample distance of 136 m and will deliver a 60 cm by 4 cm x-ray beam, both monochromatic and white, to a three-storey satellite building fully equipped for pre-clinical and clinical research. Currently operating with a 1.4 Tesla multi-pole wiggler, it will upgrade to a 4.2 Tesla device, which requires the ability to handle up to 21 kW of x-ray power at any point along the beamline. The applications envisaged for this facility include imaging thick objects encompassing materials, humans and animals. Imaging can be performed in the range 15-150 keV. Radiotherapy research typically requires energies between 30 and 120 keV, for both monochromatic and broad beams.

  19. Eco-geophysical imaging of watershed-scale soil patterns links with plant community spatial patterns

    USDA-ARS?s Scientific Manuscript database

    The extent to which soil resource availability, nutrients or moisture, controls the structure, function and diversity of plant communities has aroused considerable interest in the past decade, and remains topical in light of global change. Numerous plant communities are controlled either by water o...

  20. Chattanooga State Technical Community College Marketing Plan 1981-82.

    ERIC Educational Resources Information Center

    Hoppe, Sherry; Haddock, David

    Chattanooga State Technical Community College's (CSTCC's) marketing plan is presented in six parallel sections. The first of these deals with building the overall image of the college, increasing community awareness, and disseminating general information. The other five sections focus on marketing the following college programs and services:…

  1. Painting the Emerging Image: Portraits of Family-Informed Scholar Activism

    ERIC Educational Resources Information Center

    Maxis, Sophie; Janson, Christopher; Jamison, Rudy; Whaley, Keon

    2017-01-01

    In this article, we, two professors and two students of educational leadership, embrace the pedagogies of community engagement through the ecologies of self, organization, and community. In this article, we explore the development of community-engaged scholars and practitioners through two distinct lenses: faculty who facilitate engaged learning…

  2. A Bibliography on Police and Community Relations.

    ERIC Educational Resources Information Center

    Miller, Martin G., Comp.

    A reflection of concerns of social scientists and of those involved in law enforcement, this extensive bibliography on police and community relations covers general material (including historical reviews); problems and approaches in police administration; the police image and community relations; the impact of the civil rights movement and civil…

  3. C-IMAGE Teachers at Sea Maiden Voyages: Promoting Authentic Scientific Research in the Classroom

    NASA Astrophysics Data System (ADS)

    Hine, A. C.; Greely, T.; Lodge, A.

    2012-12-01

    The Center for Integrated Modeling & Analysis of Gulf Ecosystems (C-IMAGE) is one of eight consortia participating in the BP/Gulf of Mexico Research Initiative. C-IMAGE is a comprehensive research consortium of 13 national and international universities tasked with evaluating the environmental impacts of the 2010 Deepwater Horizon Oil Spill (DWH) on coastal ecosystems, the water column and the sea floor. The associated C-IMAGE research cruises provide a unique opportunity for Florida's K12 science educators to participate in the data collection and collaboration process alongside marine scientists as members of the scientific crew. The mission of the C-IMAGE cruises is to help answer several fundamental questions about the DWH event and subsequent impacts on the plankton population, reef and fish communities and the microbial communities. Deep sea sediment samples, plankton and fishes collected during these expeditions are the data sources. Sampling activities include the use of the SIPPER plankton sampler, a multi-core sediment system and long line surveys to assess fish health. While at sea, teachers participate in the research and serve as the ship-to-shore communicator via social media (Facebook, Twitter, daily blogs) and live video conferencing with formal and informal classrooms. Marine scientists, post-docs and graduate students participating in the C-IMAGE cruises collaborate with the teacher on board to communicate the science, technology and life-at-sea experiences to educational and general audiences. Upon return to shore, teachers translate their at-sea learning experience into understandable inquiry-based lessons about the science and technology encompassing the northern Gulf of Mexico ecology, the DWH event and subsequent impacts. Lessons developed from the cruises will inform a future series of C-IMAGE Teacher Professional Development sessions during Phase 2 of outreach activities. The results from three Gulf of Mexico expeditions (Aug-Nov) will be presented, relating to teachers' working knowledge of research and sampling procedures as well as metrics for the potential value added of social media as a mechanism for communicating research with formal and informal audiences. C-IMAGE teachers will engage in research with experts in biological and chemical modeling, marine resource assessment, sedimentary geochemistry and toxicology. This research is made possible by a grant from BP/The Gulf of Mexico Research Initiative. Contract #SA 12-10/GoMRI-007.

  4. Changes in assembly processes in soil bacterial communities following a wildfire disturbance.

    PubMed

    Ferrenberg, Scott; O'Neill, Sean P; Knelman, Joseph E; Todd, Bryan; Duggan, Sam; Bradley, Daniel; Robinson, Taylor; Schmidt, Steven K; Townsend, Alan R; Williams, Mark W; Cleveland, Cory C; Melbourne, Brett A; Jiang, Lin; Nemergut, Diana R

    2013-06-01

    Although recent work has shown that both deterministic and stochastic processes are important in structuring microbial communities, the factors that affect the relative contributions of niche and neutral processes are poorly understood. The macrobiological literature indicates that ecological disturbances can influence assembly processes. Thus, we sampled bacterial communities at 4 and 16 weeks following a wildfire and used null deviation analysis to examine the role that time since disturbance has in community assembly. Fire dramatically altered bacterial community structure and diversity as well as soil chemistry for both time-points. Community structure shifted between 4 and 16 weeks for both burned and unburned communities. Community assembly in burned sites 4 weeks after fire was significantly more stochastic than in unburned sites. After 16 weeks, however, burned communities were significantly less stochastic than unburned communities. Thus, we propose a three-phase model featuring shifts in the relative importance of niche and neutral processes as a function of time since disturbance. Because neutral processes are characterized by a decoupling between environmental parameters and community structure, we hypothesize that a better understanding of community assembly may be important in determining where and when detailed studies of community composition are valuable for predicting ecosystem function.

  5. Changes in assembly processes in soil bacterial communities following a wildfire disturbance

    PubMed Central

    Ferrenberg, Scott; O'Neill, Sean P; Knelman, Joseph E; Todd, Bryan; Duggan, Sam; Bradley, Daniel; Robinson, Taylor; Schmidt, Steven K; Townsend, Alan R; Williams, Mark W; Cleveland, Cory C; Melbourne, Brett A; Jiang, Lin; Nemergut, Diana R

    2013-01-01

    Although recent work has shown that both deterministic and stochastic processes are important in structuring microbial communities, the factors that affect the relative contributions of niche and neutral processes are poorly understood. The macrobiological literature indicates that ecological disturbances can influence assembly processes. Thus, we sampled bacterial communities at 4 and 16 weeks following a wildfire and used null deviation analysis to examine the role that time since disturbance has in community assembly. Fire dramatically altered bacterial community structure and diversity as well as soil chemistry for both time-points. Community structure shifted between 4 and 16 weeks for both burned and unburned communities. Community assembly in burned sites 4 weeks after fire was significantly more stochastic than in unburned sites. After 16 weeks, however, burned communities were significantly less stochastic than unburned communities. Thus, we propose a three-phase model featuring shifts in the relative importance of niche and neutral processes as a function of time since disturbance. Because neutral processes are characterized by a decoupling between environmental parameters and community structure, we hypothesize that a better understanding of community assembly may be important in determining where and when detailed studies of community composition are valuable for predicting ecosystem function. PMID:23407312

  6. Community response of zooplankton to oceanographic changes (2002-2012) in the central/southern upwelling system of Chile

    NASA Astrophysics Data System (ADS)

    Medellín-Mora, Johanna; Escribano, Ruben; Schneider, Wolfgang

    2016-03-01

    A 10-year time series (2002-2012) at Station 18 off central/southern Chile allowed us to study variations in zooplankton along with interannual variability and trends in oceanographic conditions. We used an automated analysis program (ZooImage) to assess changes in the mesozooplankton size structure and the composition of the taxa throughout the entire community. Oceanographic conditions changed over the decade: the water column became less stratified, more saline, and colder; the mixed layer deepened; and the oxygen minimum zone became shallower during the second half of the time series (2008-2012) in comparison with the first period (2002-2007). Both the size structure and composition of the zooplankton were significantly associated with oceanographic changes. Taxonomic and size diversity of the zooplankton community increased in the more recent period. During the second period, small copepods (<1 mm) decreased in abundance and were replaced by medium (1-1.5 mm) and larger (>1.5 mm) copepods, whereas euphausiids, decapod larvae, appendicularians and ostracods increased in abundance. These findings indicated that the zooplankton community structure in this eastern boundary ecosystem was strongly influenced by variability of the upwelling process. Thus, climate-induced forcing of upwelling trends can alter the zooplankton community in this highly productive region, with potential consequences for the ecosystem food web.

  7. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying sources and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
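
    Deconvolution's efficiency advantage comes from doing the work in the Fourier domain. As one concrete and common variant, a Wiener-regularized FFT deconvolution of a single refocused plane might look like the sketch below; the paper's own algorithm and parameters may differ, and the names are illustrative.

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, k=1e-3):
        """Frequency-domain (Wiener) deconvolution of one refocused plane.

        blurred : 2-D image refocused at some depth
        psf     : system point spread function for that depth, same shape, centered
        k       : regularization constant standing in for the noise-to-signal ratio
        A 3-D reconstruction in the spirit of the paper would apply the same
        idea slice-by-slice or with 3-D FFTs over the focal stack.
        """
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(blurred)
        F = G * np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter
        return np.real(np.fft.ifft2(F))
    ```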

  8. Automated processing pipeline for neonatal diffusion MRI in the developing Human Connectome Project.

    PubMed

    Bastiani, Matteo; Andersson, Jesper L R; Cordero-Grande, Lucilio; Murgasova, Maria; Hutter, Jana; Price, Anthony N; Makropoulos, Antonios; Fitzgibbon, Sean P; Hughes, Emer; Rueckert, Daniel; Victor, Suresh; Rutherford, Mary; Edwards, A David; Smith, Stephen M; Tournier, Jacques-Donald; Hajnal, Joseph V; Jbabdi, Saad; Sotiropoulos, Stamatios N

    2018-05-28

    The developing Human Connectome Project is set to create and make available to the scientific community a 4-dimensional map of functional and structural cerebral connectivity from 20 to 44 weeks post-menstrual age, to allow exploration of the genetic and environmental influences on brain development, and the relation between connectivity and neurocognitive function. A large set of multi-modal MRI data from fetuses and newborn infants is currently being acquired, along with genetic, clinical and developmental information. In this overview, we describe the neonatal diffusion MRI (dMRI) image processing pipeline and the structural connectivity aspect of the project. Neonatal dMRI data poses specific challenges, and standard analysis techniques used for adult data are not directly applicable. We have developed a processing pipeline that deals directly with neonatal-specific issues, such as severe motion and motion-related artefacts, small brain sizes, high brain water content and reduced anisotropy. This pipeline allows automated analysis of in-vivo dMRI data, probes tissue microstructure, reconstructs a number of major white matter tracts, and includes an automated quality control framework that identifies processing issues or inconsistencies. We here describe the pipeline and present an exemplar analysis of data from 140 infants imaged at 38-44 weeks post-menstrual age. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  9. The InSAR Scientific Computing Environment

    NASA Technical Reports Server (NTRS)

    Rosen, Paul A.; Gurrola, Eric; Sacco, Gian Franco; Zebker, Howard

    2012-01-01

    We have developed a flexible and extensible Interferometric SAR (InSAR) Scientific Computing Environment (ISCE) for geodetic image processing. ISCE was designed from the ground up as a geophysics community tool for generating stacks of interferograms that lend themselves to various forms of time-series analysis, with attention paid to accuracy, extensibility, and modularity. The framework is Python-based, with code elements rigorously componentized by separating input/output operations from the processing engines. This allows greater flexibility and extensibility in the data models, and creates algorithmic code that is less susceptible to unnecessary modification when new data types and sensors become available. In addition, the components support provenance and checkpointing to facilitate reprocessing and algorithm exploration. The algorithms, based on legacy processing codes, have been adapted to assume a common reference track approach for all images acquired from nearby orbits, simplifying and systematizing the geometry for time-series analysis. The framework is designed to easily allow user contributions, and is distributed for free use by researchers. ISCE can process data from the ALOS, ERS, EnviSAT, Cosmo-SkyMed, RadarSAT-1, RadarSAT-2, and TerraSAR-X platforms, starting from Level-0 or Level-1 as provided from the data source, and going as far as Level-3 geocoded deformation products. With its flexible design, it can be extended with raw/metadata parsers to enable it to work with radar data from other platforms.
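
    The separation of input/output from processing engines that the abstract emphasizes can be illustrated schematically. The classes below are hypothetical and far simpler than ISCE's actual component framework; they show only the design idea of swapping readers without touching the engine.

    ```python
    class RawReader:
        """I/O component: knows file formats, knows nothing about algorithms."""
        def __init__(self, path):
            self.path = path

        def read(self):
            # Sensor-specific raw/metadata parsing would happen here.
            return {"samples": [], "metadata": {"sensor": "hypothetical"}}

    class FocusEngine:
        """Processing component: consumes data structures, never touches files."""
        def run(self, data):
            # SAR focusing would happen here; a no-op placeholder.
            return data

    # Swapping in a new reader lets the same engine serve a new sensor unchanged.
    product = FocusEngine().run(RawReader("scene.raw").read())
    ```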

  10. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy algorithm with a more conventional image processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.
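
    The abstract does not detail the algorithm, but one common way to apply fuzzy logic to sun-sensor centroiding is to replace a hard intensity threshold with a graded membership function. The sketch below is an illustrative guess at that flavor, not the instrument's implementation; lo and hi are hypothetical calibration constants.

    ```python
    import numpy as np

    def fuzzy_centroid(image, lo, hi):
        """Centroid a sun spot using a fuzzy 'bright pixel' membership.

        Pixel membership ramps linearly from 0 (at intensity lo) to 1 (at hi)
        instead of a hard threshold; the centroid is the membership-weighted
        mean position. Returns (x, y) in pixel coordinates.
        """
        mu = np.clip((image.astype(float) - lo) / (hi - lo), 0.0, 1.0)
        ys, xs = np.indices(image.shape)
        total = mu.sum()
        return (xs * mu).sum() / total, (ys * mu).sum() / total
    ```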

  11. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image, and it facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
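
    The integral-image trick the library is built on is compact enough to state directly: precompute cumulative sums once, then any rectangle's sum costs four table lookups. FIIAT itself is written in C; the sketch below shows the same technique in Python for brevity, with hypothetical function names.

    ```python
    import numpy as np

    def integral_image(img):
        """Summed-area table: S[i, j] = sum of img[:i, :j]."""
        return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

    def rect_sum(S, top, left, bottom, right):
        """Sum of img[top:bottom, left:right] in O(1) via four lookups."""
        return S[bottom, right] - S[top, right] - S[bottom, left] + S[top, left]
    ```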

  12. Social image quality

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed where the observers are Internet users. A website with a simple user interface that enables Internet users from anywhere at any time to vote for the better quality version of a pair of the same image has been constructed. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and had accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies, such as crowdsourcing, offer a promising new paradigm for image and video quality assessment where hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet user generated social image quality (SIQ) data for a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases and will also be extended to include videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
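
    Of the three rank-aggregation methods, the Bradley-Terry model has a particularly compact estimation procedure. Below is a minimal sketch of the standard minorization-maximization updates on a hypothetical win-count matrix; it is not the paper's exact implementation (which uses Dykstra's extension of the model).

    ```python
    import numpy as np

    def bradley_terry(wins, n_iter=100):
        """Estimate Bradley-Terry quality scores from a pairwise win matrix.

        wins[i, j] = number of votes preferring image i over image j
        (a hypothetical tally built from recorded pair comparisons;
        the diagonal is assumed zero). Uses the standard MM update.
        """
        n = wins.shape[0]
        comparisons = wins + wins.T          # total votes between each pair
        p = np.ones(n)
        for _ in range(n_iter):
            denom = (comparisons / (p[:, None] + p[None, :])).sum(axis=1)
            p = wins.sum(axis=1) / denom
            p /= p.sum()                     # fix the arbitrary scale
        return p
    ```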

  13. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  14. A community engagement process for families with children with disabilities: lessons in leadership and policy.

    PubMed

    Vargas, Claudia María; Arauza, Consuelo; Folsom, Kim; Luna, María del Rosario; Gutiérrez, Lucy; Frerking, Patricia Ohliger; Shelton, Kathleen; Foreman, Carl; Waffle, David; Reynolds, Richard; Cooper, Phillip J

    2012-01-01

    This article examines a community engagement process developed as part of leadership training for clinical trainees in the Oregon Leadership Education for Neurodevelopmental and Related Disabilities (LEND) Program in a complex community with diverse families who have children with disabilities. The goal is to examine the process and lessons learned for clinical trainees and their mentors from such a process. This is a case study conducted as community-engaged action research by participant-observers involved in the Cornelius community for the past 4 years. The authors include faculty members and clinical trainees of the Oregon LEND Program at the Oregon Health & Science University, families with children with disabilities in the community, and city officials. It is a critical case study in that it studied a community engagement process in one of the poorest communities in the region, with an unusually high population of children with disabilities, and in a community that is over half Latino residents. Lessons learned here can be helpful in a variety of settings. A community engagement forum, community engagement processes, a debriefing using a seven-element feasibility framework, and trainee evaluations are key elements. A community engagement forum is a meeting to which community members and stakeholders from pertinent agencies are invited. Community engagement processes used include a steering committee made up of, and guided by, community members, which meets on a regular basis to prioritize and carry out responses to problems. Trainee evaluations are based on a set of questions to trigger open-ended responses. Lessons learned are based on assessments of initial and long-term outcomes of the community engagement processes in which families, community members, local officials and LEND trainees and faculty participate, as well as on trainee participant-observations, end-of-year evaluations and trainee debriefings at the time of the initial community assessment forum. The thesis that emerges is that community engagement processes can afford significant opportunities for clinicians in training to develop their leadership skills toward improving maternal and child health for minority families with children with disabilities while building capacity in families for advocacy and facilitating change in the community.

  15. An evaluation of new high resolution image collection and processing techniques for estimating shrub cover and detecting landscape changes associated with military training in arid lands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, D.J.; Ostler, W.K.

    2000-02-01

    Research funded by the US Department of Defense, US Department of Energy, and the US Environmental Protection Agency as part of Project CS-1131 of the Strategic Environmental Research and Development Program evaluated novel techniques for collecting high-resolution images in the Mojave Desert using helicopters, helium-filled blimps, kites, and hand-held telescoping poles at heights from 1 to 150 meters. Several camera types, lenses, films, and digital techniques were evaluated on the basis of their ability to correctly estimate canopy cover of shrubs. A high degree of accuracy was obtained with photo scales of 1:4,000 or larger and flatbed scanning rates from films or prints of 300 lines per inch or greater. Smaller-scale images were of value in detecting retrospective changes in cover of large shrubs but failed to detect smaller shrubs. Excellent results were obtained using inexpensive 35-millimeter cameras and new super-fine-grain films such as Kodak's Royal Gold™ (ASA 100), or megapixel digital cameras. New image-processing software, such as SigmaScan Pro™, makes it possible to accurately measure areas up to 1 hectare in size for total cover and density in 10 minutes, compared to several hours or days of field work. In photographs with scales of 1:1,000 and 1:2,000, it was possible to detect cover and density of up to four dominant shrub species. Canopy cover and other parameters such as width, length, Feret diameter, and shape factors can be measured nearly instantaneously for each individual shrub, yielding size-distribution histograms and other statistical data on plant community structure. Use of the technique is being evaluated in a four-year study of military training impacts at Fort Irwin, California, with results compared against image processing of conventional aerial photography and satellite imagery, including the new 1-meter-pixel IKONOS images. The technique is a valuable new tool for accurately assessing vegetation structure and landscape changes due to military or other land-use disturbances.
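
    The core cover measurement amounts to classifying pixels as shrub versus background and taking the vegetated fraction. The sketch below is a hypothetical stand-in for the SigmaScan Pro workflow described above, assuming an RGB pixel array and a simple excess-green vegetation index; the threshold value is an assumption that would need calibration against field data.

        import numpy as np

        def canopy_cover_fraction(rgb, threshold=20.0):
            """Fraction of pixels classified as shrub canopy in an RGB image.

            Uses the excess-green index 2G - R - B; pixels above the
            threshold are counted as vegetation.
            """
            r, g, b = (rgb[..., i].astype(float) for i in range(3))
            vegetation = (2 * g - r - b) > threshold
            return vegetation.mean()

    Multiplying the returned fraction by the ground area covered by the photograph converts it to canopy area; per-shrub statistics such as width, length, and Feret diameter would additionally require labeling connected components.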

  16. Making Connections: The Legacy of an Intergenerational Program.

    PubMed

    Thompson, Edward H; Weaver, Andrea J

    2016-10-01

    In the face of shrinking opportunities for children and older adults to routinely interact with one another (sometimes the result of adolescent geographies, age-segregated and gated communities, and families' geographical mobility), many communities have introduced intergenerational programs within the school curriculum. For more than a decade, one Massachusetts community has maintained an intergenerational program that brings fourth-grade students together with older adults. The question is whether students' involvement in an intergenerational program lessened ageist beliefs 5-9 years later. A quasi-experimental research design examined the "images of aging" held by 944 students who grew up in neighboring towns and attend a regional high school. Participants completed a brief questionnaire. Separate regression analyses of positive and negative images of aging, controlling for students' frequency and self-reported quality of interaction with older adults, ethnicity, age, and gender, reveal a town difference in students' positive, but not negative, images of aging. What is certain is that the high school students from the community with ongoing intergenerational programming hold a more positive image of older adults. Further research is needed to parse out exactly how short- and long-term legacy effects arise when young students have an opportunity to interact closely with older adults who are not their grandparents or neighbors. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. RETRACTED: Synthesis, characterization and catalytic activity of silver nanoparticles using Tribulus terrestris leaf extract

    NASA Astrophysics Data System (ADS)

    Ashokkumar, S.; Ravi, S.; Kathiravan, V.; Velmurugan, S.

    2014-03-01

    This article has been retracted: please see the Elsevier Policy on Article Withdrawal. This article has been retracted at the request of the Editor. The article contains duplicate images (Fig. 5A and B, as well as Fig. 5C and D) that differ only in magnification and orientation despite being described as different samples. Figure 3 likewise displays duplicated data described as different samples. The scientific community takes a very strong view on this kind of scientific misconduct, and apologies are offered to readers of the journal that this was not detected during the submission process.

  18. User guide to the Magellan synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Wall, Stephen D.; Mcconnell, Shannon L.; Leff, Craig E.; Austin, Richard S.; Beratan, Kathi K.; Rokey, Mark J.

    1995-01-01

    The Magellan radar-mapping mission collected a large amount of science and engineering data. Now available to the general scientific community, this data set can be overwhelming to someone who is unfamiliar with the mission. This user guide outlines the mission operations and data set so that someone working with the data can understand the mapping and data-processing techniques used in the mission. Radar-mapping parameters as well as data acquisition issues are discussed. In addition, this user guide provides information on how the data set is organized and where specific elements of the set can be located.

  19. A View from Above Without Leaving the Ground

    NASA Technical Reports Server (NTRS)

    2004-01-01

    In order to deliver accurate geospatial data and imagery to the remote sensing community, NASA is constantly developing new image-processing algorithms while refining existing ones for technical improvement. For 8 years, the NASA Regional Applications Center at Florida International University has served as a test bed for implementing and validating many of these algorithms, helping the Space Program to fulfill its strategic and educational goals in the area of remote sensing. The algorithms in return have helped the NASA Regional Applications Center develop comprehensive semantic database systems for data management, as well as new tools for disseminating geospatial information via the Internet.

  20. ISMRM Raw data format: A proposed standard for MRI raw datasets.

    PubMed

    Inati, Souheil J; Naegele, Joseph D; Zwart, Nicholas R; Roopchansingh, Vinai; Lizak, Martin J; Hansen, David C; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E; Sørensen, Thomas S; Hansen, Michael S

    2017-01-01

    This work proposes the ISMRM Raw Data format as a common MR raw data format, which promotes algorithm and data sharing. A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using magnetic resonance imaging scanners from four vendors, converted to ISMRM Raw Data format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. The proposed raw data format solves a practical problem for the magnetic resonance imaging community. It may serve as a foundation for reproducible research and collaborations. The ISMRM Raw Data format is a completely open and community-driven format, and the scientific community (including commercial vendors) is invited to participate either as users or developers. Magn Reson Med 77:411-421, 2017. © 2016 Wiley Periodicals, Inc.
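
    For a fully sampled Cartesian acquisition, reconstructing one frame of the format's tagged k-space data reduces to a centered inverse 2-D FFT per coil followed by coil combination. The sketch below assumes the k-space array has already been assembled from the file via one of the ISMRM Raw Data APIs; it illustrates the reconstruction step only and is not the project's reference implementation.

        import numpy as np

        def reconstruct_cartesian(kspace):
            """Root-sum-of-squares image from k-space shaped (coils, ky, kx)."""
            axes = (-2, -1)
            # centered inverse FFT: shift, transform, shift back
            coil_images = np.fft.fftshift(
                np.fft.ifft2(np.fft.ifftshift(kspace, axes=axes), axes=axes),
                axes=axes)
            # combine coils by root sum of squares
            return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))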
